Aetna uses IBM's DB2 Analytics Accelerator to improve the performance of long-running reports on its DB2 database. The accelerator offloads eligible queries to the Netezza appliance, reducing query times from hours to seconds. Aetna saw a 4x compression rate on its data and was able to load 1.5 billion rows in 15 minutes. Reports that previously timed out after 82 minutes now return results in 27 seconds, improving business users' ability to analyze data.
A Common Problem:
- My Reports run slow
- Reports take 3 hours to run
- We don’t have enough time to run our reports
- It takes 5 minutes to view the first page!
As report processing time increases, so does the frustration level.
Performance tuning becomes essential when a system turns sluggish or stops responding altogether, which usually happens as load increases. By tuning a system to handle higher loads, organizations can improve server performance and save significant money without spending on new infrastructure or applications.
2. About Aetna
- 3rd largest Health Insurer in the US (based on revenue)
- 2012 Revenue: $35.54 Billion
- Employees worldwide: 34,000+
- Business locations: International (China, Dubai, London)
- Membership: 22 million medical members, 14.3 million dental members, 13.8 million pharmacy members
- Health Care Networks: 1 million health care professionals, 5,300 hospitals, 597,000 doctors and specialists
3. IBM DB2 Analytics Accelerator Powered by Netezza
PureData System for Analytics (PDA)
DB2 Accelerator or “the accelerator”
Agenda
Aetna Environment
Results Obtained
Business Value
Technical “Deep Dive”
Quiz
Summary
4. Production Environment: DB8G – 6-member DB2 data sharing reporting environment
DATA – 400+ tables of various sizes, about 9TB total
1 over a Billion Rows
40% over Half a Billion Rows
40% over 100 Million Rows
10% between 10 and 100 Million Rows
Major reporting applications targeted
Member – MF Warehouse of all enrollment data
Over Payment Tracker (OPT)
Plan Sponsor Reporting
Claim Reporting System
Aetna Environment
5. Ideal Use Case – Long running DB2 reports
Saved $ over application redesign
This was an Infrastructure funded project
There is no chargeback to applications
Aetna Environment – Why “the Accelerator”
6. Aetna DB2 Analytics Accelerator Environment
[Architecture diagram] Source OLTP data (DB2 z/OS, DB3G) flows through DataStage ETL into the DB8G reporting warehouse (DB2 z/OS v10 NFM). DB2 queries from the reporting tools – Cognos, Business Objects (Webi, Crystal), MS Access, SAS, Tableau – are offloaded via query acceleration to the DB2 Accelerator (v3), which is populated by load and incremental update.
7. No application code changes
No tuning*
No indexes
48TB Total Storage
16 TB Dedicated to User Data
4 to 1 Compression Rate
Netezza TwinFin 1000-6* (Aetna slide)
8. What is IBM DB2 Analytics Accelerator?
DB2 software offering that comes with a hardware appliance
Start the accelerator with a DB2 command
DB2 detects the “heartbeat” of the attached appliance
DB2 optimizer recognizes when a query can be offloaded
zPARM: CURRENT QUERY ACCELERATION = ENABLE WITH FAILBACK
Data needs to be loaded into the appliance
Data Studio with a plug-in is the user interface to the accelerator
DB2 EXPLAIN modified to show why a query will not offload
10. Results: LOAD Run Times
LOAD data
5.1GB loaded in 85 seconds
589,324,806 rows in 9.5 minutes (62M rows/minute)
1.5 billion rows in 15 minutes (100 million rows/minute)
15 minutes for 18.3 gigabytes of data (33 tables, the largest had > 550,000,000 rows)
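As a quick sanity check, the per-minute rates quoted above follow directly from the raw row counts and load times; a minimal sketch:

```python
# Sanity-check the load rates quoted on the slide from the raw figures.
def rows_per_minute(rows: int, minutes: float) -> float:
    """Average load rate in rows per minute."""
    return rows / minutes

# 589,324,806 rows in 9.5 minutes -> roughly 62 million rows/minute
print(round(rows_per_minute(589_324_806, 9.5) / 1_000_000))  # 62

# 1.5 billion rows in 15 minutes -> 100 million rows/minute
print(round(rows_per_minute(1_500_000_000, 15) / 1_000_000))  # 100
```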
12. Accelerator Modeler – APAR PM90535
This APAR provides new function to allow a DB2 subsystem to model the existence of an accelerator, to evaluate the CPU and elapsed time spent in DB2 for static SQL queries that would be eligible for acceleration if an accelerator were active. No accelerator is required or needs to be active for this modeling to occur.
PM90886 also required
13. What the customers are saying
“No, we can’t run these queries now, only overnight.” (But when they did:) “WOW! The answer came back so fast, I thought it must have failed.”
“This was a CPU timeout after the query ran for 82 wall-clock minutes yesterday. It ran successfully this morning in 27 seconds. Whatever you paid for this, it was well worth it!”
“Thank you!!! Hey… what a difference. I just re-ran a query that fails on DB2 workspace. It just ran successfully in 12 minutes.”
“I just created a query to go after a report that I am asked for regularly. I usually have to build in stages because of the two name fields. The last time I ran reporting I had to parse it into 6 queries and schedule the reporting. On average it takes 45 minutes to 1 hour to pull the reporting back – just ran the same reporting in 17 seconds!”
14. More accolades
“We’re noticing much faster response times today. I re-ran a few reports, without any modifications, to compare times. Here are the results.”
06-Feb    12-Feb
301 sec   23 sec
240 sec   13 sec
262 sec   13 sec
322 sec   13 sec
15. And Finally
“Our business partners are very pleased. The throughput and the ability to meet some previously unfulfilled needs are being well received.”
And just a few side benefits…
No sort space failures in the reporting environment
MIPS savings with cost avoidance of $$$
Process changes – Member can process on weekends
16. Business Value – OPT
Plan Sponsor/Provider performance guarantees
E.g. time spent with patient, mean time to medication, readmission rate
Payments to be recovered if performance is not met
Fines involved per regulatory requirements
500+ reports per quarter for recoveries
Monthly aggregate rollup on overpayment metrics – very slow turnaround time
Trending reports – where are overpayments occurring and why
Currently in our reporting environment: inadequate table design and structure
No full view of providers
Enterprise view of overpayments returned, lag time, duration for collection
Identify providers not responding
Trend analysis over time
Root cause resolution = $ savings plus better healthcare outcomes
17. Business Value – OPT
Manages work better
Root Cause Team –
Ability to run yearly trending reports of overpayments
Ability to scale yearly reports down and target anomalies
Gives ability to size issues quickly
Business people now do requests themselves and do not need to rely on a technical person for assistance (doesn’t mean IT staff!)
Able to do whatever we want to make informed and right decisions, where we were so handcuffed before
18. There were some bumps along the way
Software offering that comes with hardware – who supports it?
Require short-range OSA cards in the data center
Configure switches for jumbo frame support
Ensure WLM environments are defined correctly
DSNX881I – critical error messages; you must alert on this message
SQLSTATE 57011 – increase NZ_SPRINGFIELD_SIZE to 4096; concurrency could be reduced (Netezza tuning knob)
Corrupted date field returned from query – Update 5 (PM75749), or DB2 Connect v9.7 fp7 (V2)
SQLCODE -516 currently open – >32K result set from Access and Crystal; possible fix LUW APAR IC86946 – patch supplied 9.7 fp3a runtime client (V2)
Business Objects – two-part query predicate runs slow – UK92607 (V2)
19. There were some bumps along the way
(Below are V3 Netezza)
Reason Code 00D35011 on accelerated query – PM90148 10/26, includes 35 PTFs
Accelerator stopping for no apparent reason – new GUI, V3 PTF3 UK96194
Query performance degrading when replication is enabled – APAR fix, PTF3 prereq
Query statistics being reset – PTF4
Accelerated query failing [57011] – frequency statistics on the Netezza-resident objects needed manual update
20. DSNX881I Messages –
Capture and alert through automation
Email to Data Center and team members
DSNX881I *DB8C 2 E 101 (07-MAY-13, 13:39:41 EDT) NPS SYSTEM NZ82011-H1 - SERVICE REQUESTED FOR SPU 1188 AT 07-MAY-13, 13:39:41 EDT SYSTEM. LOCATION:LOGICAL NAME:'SPA1.SPU9' PHYSICAL LOCATION:'1ST RACK, 1ST SPA, SPU IN 9TH SLOT' ERROR STRING:SPU PHYSICAL INTERFACE ERROR
DSNX881I *DB8H 10 E 50 (28-JAN-13, 11:49:47 EST) NPS SYSTEM NZ82011-H1 - DISK ERROR ON DISK 1146. SPUHWID:1153 DISK LOCATION:LOGICAL NAME:'SPA1.DISKENCL4.DISK 9' PHYSICAL LOCATION:'1ST RACK, 4TH DISKENCLOSURE, DISK IN ROW 3/COLUMN 1' ERRTYPE:3 ERRCODE:116 OPER:0 DATAPAR
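The capture-and-alert automation above can be approximated with a simple message filter; the regular expression below is a hypothetical sketch whose field layout is inferred only from the two example messages on this slide:

```python
import re

# Hypothetical parser for DSNX881I console messages as captured by
# automation; the field layout is inferred from the slide's examples.
DSNX881I = re.compile(
    r"DSNX881I\s+\*(?P<member>\S+)\s+(?P<seq>\d+)\s+(?P<sev>[EWI])\s+"
    r"(?P<code>\d+)\s+\((?P<when>[^)]+)\)\s+(?P<text>.*)"
)

def should_alert(message: str) -> bool:
    """Alert the data center on any error-severity (E) accelerator message."""
    m = DSNX881I.search(message)
    return bool(m) and m.group("sev") == "E"

msg = ("DSNX881I *DB8H 10 E 50 (28-JAN-13, 11:49:47 EST) "
       "NPS SYSTEM NZ82011-H1 - DISK ERROR ON DISK 1146.")
print(should_alert(msg))  # True
```

In a real setup the `should_alert` result would feed whatever email or paging hook the site already uses.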
21. What’s Next with the Accelerator at Aetna?
Current V4 beta participant
Current DB2 Loader v1.1 beta participant
Installation of Version 4 – Static SQL support
Consider eliminating DB8G Indexes
Consider eliminating DB8G subsystem members
Evaluate ETL needs
Workload Manager feature
High Performance Storage Saver exploitation
High Availability and DBAR
Performance Monitoring and Reporting
New zBLC Workgroup created for accelerator
25. Hybrid Database System
DB2 for z/OS – high volume, high concurrency, transactional workload and batch processing:
• Data shared across all members
• Lock-based concurrency control
• Write-ahead log (WAL)
• Indexes
• …
IBM DB2 Analytics Accelerator (powered by PureData System for Analytics) – low volume, low concurrency, complex queries:
• Data partitioned across worker nodes
• Multi-version concurrency control
• Immutable rows (no in-place updates)
• Automatic zone maps, auto-reorg
• …
27. DB2 for z/OS Optimizer Decisions
System checks
• zPARM value
• PROFILE values
• QUERY ACCELERATION special register value
Status checks
• Accelerator available (heartbeat)?
• Accelerator ready to accept queries?
Table checks
• Referenced tables loaded to accelerator?
• Referenced tables enabled for acceleration?
SQL checks
• Check for offload limitations (UDF? XML?)
Heuristics
• Matching index?
• No grouping or aggregation (just I/O)?
• Big result set?
• Size of tables very small?
• Cost threshold
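The cascade above amounts to a short-circuiting series of predicates; a minimal sketch, where every field name is a hypothetical stand-in (the real checks live inside the DB2 optimizer):

```python
# Hypothetical sketch of the offload decision cascade; all names here
# are illustrative, not a real DB2 interface.
def can_offload(query: dict, env: dict) -> bool:
    checks = [
        env["acceleration_enabled"],                       # system: zPARM / special register
        env["accelerator_available"],                      # status: heartbeat, ready for queries
        all(t in env["loaded_tables"] for t in query["tables"]),  # table checks
        not query["uses_udf_or_xml"],                      # SQL: offload limitations
        query["estimated_cost"] >= env["cost_threshold"],  # heuristic: cost-based
    ]
    return all(checks)

env = {"acceleration_enabled": True, "accelerator_available": True,
       "loaded_tables": {"CLAIMS", "MEMBERS"}, "cost_threshold": 1000}
query = {"tables": ["CLAIMS"], "uses_udf_or_xml": False, "estimated_cost": 50_000}
print(can_offload(query, env))  # True
```

If any check fails, the query simply runs in DB2 as usual (the "failback" behavior named by the zPARM setting).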
28. Data Maintenance
Full table re-load / partition re-load:
• Snapshot of a table
• Can use the RTS change detection feature
• Very efficient: ~0.5 CPU seconds per 100 MB net changes
• High throughput: up to 1.5 TB/h total, 220 GB/h per stream, but actual throughput varies
• Redundant reloads if not all rows of a table or partition have changed; overhead for the duplicate work
Incremental Update:
• Applies updates continuously, no snapshots
• Not as efficient as UNLOAD: 31 - 65 CPU seconds per 100 MB net changes
• Throughput: up to 18 GB/h, but actual throughput varies
• Granularity: changes at row level, independent of a table's partitioning scheme
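Using only the per-100 MB averages quoted above, the CPU cost of the two maintenance paths can be roughly compared for a given volume of net changes. A back-of-the-envelope sketch; actual costs vary:

```python
# Back-of-the-envelope CPU cost using the slide's published averages.
def reload_cpu_seconds(changed_mb: float) -> float:
    """Snapshot re-load: ~0.5 CPU seconds per 100 MB of net changes."""
    return 0.5 * changed_mb / 100

def incremental_cpu_seconds(changed_mb: float, per_100mb: float = 48.0) -> float:
    """Incremental Update: 31-65 CPU s per 100 MB; 48 is the midpoint."""
    return per_100mb * changed_mb / 100

# For 1 GB (1000 MB) of net changes:
print(reload_cpu_seconds(1000))       # 5.0
print(incremental_cpu_seconds(1000))  # 480.0
```

The trade-off is latency, not CPU: incremental update costs far more CPU per changed byte but keeps the accelerator continuously close to the source tables.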
29. Latency Detection
Tables managed by UNLOAD-based refresh:
SYSACCEL.SYSACCELERATEDTABLES.REFRESH_TIME
Tables managed by Incremental Update:
Stored procedure SYSPROC.ACCEL_CONTROL_ACCELERATOR with the <getAcceleratorInfo/> command
<replicationInfo state="STARTED" lastStateChangeSince="2012-11-11T10:33:42.487678" latencyInSeconds="38">
All functions are available via Accelerator Studio as well
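The replicationInfo result above is plain XML, so the latency value can be pulled out with any XML parser once the stored procedure output is in hand; a minimal sketch using the attribute values from this slide:

```python
import xml.etree.ElementTree as ET

# Fragment as returned by SYSPROC.ACCEL_CONTROL_ACCELERATOR's
# <getAcceleratorInfo/> command (values reproduced from the slide).
fragment = ('<replicationInfo state="STARTED" '
            'lastStateChangeSince="2012-11-11T10:33:42.487678" '
            'latencyInSeconds="38"/>')

info = ET.fromstring(fragment)
latency = int(info.get("latencyInSeconds"))
print(info.get("state"), latency)  # STARTED 38
```

A monitoring job could compare `latency` against a site-chosen threshold before deciding whether to keep routing queries to the accelerator.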
30. Latency Management
Automate snapshot-based table refresh
Table (or partition) is available for queries during refresh
Old version of the table is used until the refresh is done
Consider using the “change detection” feature (based on DB2 RTS)
May stop offloading if latency is too high:
Session scope (QUERY ACCELERATION special register)
Table scope (SET_TABLES_ACCELERATION SP)
Accelerator scope (-STO ACCEL command)
Consider using the <waitForReplication /> command of SYSPROC.ACCEL_CONTROL_ACCELERATOR
The stored procedure returns when all commits that happened before CALLing the SP have been applied to the accelerator
31. Quiz Time
Our session today covered the IBM DB2
Analytics Accelerator.
What is another name for the accelerator?
A. International Digital Arts Awards
B. International Diabetic Athletic Association
D. Indiana Dental Assistants Association
E. Interior Designers Association of Australia (est. 1948)
F. Infectious Disease Association of America
G. It Does Amazing Acceleration!
H. None of the above
33. Thank You
Your feedback is important!
• Access the Conference Agenda Builder to complete your session surveys
o Any web or mobile browser at http://iod13surveys.com/surveys.html
o Any Agenda Builder kiosk onsite