The document discusses connectivity options for talking with DB2, including DB2 connectivity drivers, clients, and protocols. It provides an overview of DB2 connectivity drivers for JDBC, ODBC, CLI, and .NET, and how they map to older versions. Guidelines are given for selecting the right driver based on application needs, with considerations around footprint, performance, high-availability features, and workload balancing. The roles of DB2 Connect and connectivity protocols such as DRDA, private protocol, and HiperSockets are also summarized.
This document provides an overview and reference information for DB2 Version 9.1 for z/OS, including:
- Details on who should use the guide and how to read syntax diagrams
- Guidelines for planning and designing DB2 applications
- Methods for connecting application programs to DB2
- Techniques for embedding SQL statements in different programming languages
- Approaches for handling SQL errors and checking statement execution
The document discusses a lecture series on relational database technology given by Eberhard Hechler. The series will cover introductions to key concepts of relational database management systems (RDBMS) and DB2. It will provide an overview of DB2, its editions, and exercises. The morning lecture will discuss introductions and objectives, an overview of RDBMS and DB2 architecture, database security, usage of database systems, and types of database systems.
1) DB2 native REST services allow DB2 to expose SQL statements and stored procedures as RESTful APIs. A DB2 stored procedure was created and registered as a REST service.
2) The REST service was then packaged into a SAR file using the z/OS Connect build toolkit. This SAR file was deployed to z/OS Connect.
3) An API was created in z/OS Connect that maps to the REST service. The API was deployed, allowing clients to invoke the underlying DB2 stored procedure via a RESTful call to the z/OS Connect API.
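The flow above ends with a client invoking the z/OS Connect API over HTTP. As a sketch, here is how a Java client might build such a call using the standard `java.net.http` client; the host, port, and service path are hypothetical, and the exact URL shape depends on how the service and API were configured:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class Db2RestCall {
    // Hypothetical endpoint layout: DB2 native REST services are invoked with a
    // POST carrying a JSON body of input parameter values. The host, port, and
    // service path below are placeholders, not values from the original document.
    static HttpRequest buildRequest(String host, int port, String service, String jsonBody) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + host + ":" + port + "/services/" + service))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }

    public static void main(String[] args) {
        // Example call; sending it would require a reachable z/OS Connect server.
        HttpRequest req = buildRequest("zosconnect.example.com", 9443,
                "SYSIBMSERVICE/getCustomer", "{\"CUSTNO\": 42}");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request with `HttpClient.newHttpClient().send(req, ...)` would then drive the underlying DB2 stored procedure and return its result set as JSON.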
This document summarizes a presentation given at an IBM System z summit. It discusses Payment Solution Providers' (PSP) migration from HP Blade servers running Oracle to IBM System z running z/OS, WebSphere and DB2 for their payment processing division. The key benefits of System z for PSP included reliability for 24/7 processing of millions of transactions daily, scalability to handle spikes in transaction volume, and ability to process thousands of transactions per second from many concurrent users. System z also helped PSP achieve cost reduction goals and faster application development times.
IBM Insight 2013 - Aetna's production experience using IBM DB2 Analytics Acce... (Daniel Martin)
Aetna uses IBM's DB2 Analytics Accelerator to improve the performance of long-running reports on its DB2 database. The accelerator offloads eligible queries to the Netezza appliance, reducing query times from hours to seconds. Aetna saw a 4x compression rate on its data and was able to load 1.5 billion rows in 15 minutes. Reports that previously timed out after 82 minutes now return results in 27 seconds, improving business users' ability to analyze data.
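As a sanity check on the quoted load figure, 1.5 billion rows in 15 minutes works out to roughly 1.7 million rows per second. A trivial sketch of that arithmetic:

```java
public class LoadRate {
    // Rows-per-second load rate computed from a row count and a duration in minutes.
    static double rowsPerSecond(double rows, double minutes) {
        return rows / (minutes * 60.0);
    }

    public static void main(String[] args) {
        // The 1.5 billion rows / 15 minutes figure reported for the accelerator load.
        System.out.printf("%.0f rows/s%n", rowsPerSecond(1.5e9, 15)); // about 1.7 million rows/s
    }
}
```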
The document discusses IBM's Z strategy and digital transformation model. It highlights how IBM Z continues to drive the global economy by processing billions of daily transactions. It also outlines IBM's digital transformation model for clients, which includes exposing APIs to enable apps and data, evolving to automate delivery pipelines, optimizing with analytics, and predicting and responding to service interruptions. The model is meant to help clients address digital transformation needs, leverage existing IBM Z assets to accelerate transformation, and achieve business and technical goals.
Cloud computing represents a new era in information technology. It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing allows companies to get their applications up and running faster, scale and re-scale as needed, and save money by avoiding capital expenses.
IBM is promoting its new IBM zEnterprise system to help retailers address critical needs like improving the customer shopping experience, optimizing supply chains and merchandising, and driving operational efficiencies. The zEnterprise system allows retailers to reduce IT costs through large-scale consolidation, turn information into insights faster through improved performance, and integrate and centralize management across platforms.
Leveraging the power of SolrCloud and Spark with OpenShift (QAware GmbH)
Kubernetes/Cloud-Native-Meetup September 2018, Munich : Talk by Franz Wimmer (@zalintyre, Software Engineer at QAware)
Abstract: One of the most commonly used big data processing frameworks is Apache Spark. Spark processes large datasets through parallelization. Solr is a search platform based on Lucene; it can be distributed across a cluster using ZooKeeper for configuration management. Both applications can be combined to create performant big data applications.
But what if you want to scale horizontally and add a node? In a manual setup, you would have to install and configure each new node by hand. Cluster orchestrators like OpenShift claim to solve this problem.
This talk shows how to put Spark, Solr, and ZooKeeper into containers, which can then be scaled individually inside a cluster using OpenShift. We will cover OpenShift details like DeploymentConfigs, StatefulSets, Services, Routes, and Persistent Volumes, and install a complete, fail-safe, and horizontally scalable SolrCloud/Spark/ZooKeeper cluster in seconds.
You will also learn about the drawbacks and pitfalls of running Big Data applications inside an OpenShift cluster.
Syllabus of streaming courses in mainframe assembler and z/OS internals for everyone who is interested in becoming a real systems programmer or system-level software developer for the IBM mainframe platform, especially in the z/OS system environment.
Native Stored Procedures with Data Studio (Jørn Thyssen)
The document discusses how to write and debug native stored procedures in IBM Data Studio. It outlines five scenarios: 1) Creating a stored procedure from a template, 2) Running a stored procedure for testing, 3) Creating a new version of a stored procedure, 4) Debugging a stored procedure, and 5) Tuning SQL statements from stored procedures. It also mentions monitoring stored procedure performance using Query Monitor and OMPE.
The NRB Group mainframe day 2021 - IBM Z-Strategy & Roadmap - Adam John Sturg... (NRB)
This presentation is about the IBM Z software strategy: key points of IBM's strategy for the platform, covering hardware and software, with a quick view of future roadmaps.
IBM Enterprise 2014 - System z Technical University - Preliminary Agenda (Casey Lucas)
This document provides a preliminary agenda for the Enterprise2014 conference taking place October 6-10 at The Venetian in Las Vegas. It outlines the schedule of sessions each day, organized by topic tracks. Attendees can access additional information by logging into the attendee portal at ibmtechu.com after September 15, where they can customize their schedule, view presentations, and provide feedback. The agenda is subject to change and the most up-to-date information will be available through the online portal.
This document provides an overview of benchmarking market pricing for outsourced IT services. It discusses the importance of benchmarking to determine appropriate pricing and ensure the best value. The document outlines the two main types of benchmarking: cost benchmarking, which analyzes internal costs for efficiency opportunities, and price benchmarking, which analyzes outsourced service fees. It also provides guidance on when benchmarking clauses in contracts are most appropriate based on factors like contract length and commonality of services. Finally, it emphasizes the importance of properly scoping benchmarking based on the specific services included.
Mainframe Optimization with Modern Systems (Modern Systems)
Our Mainframe Optimization services are for customers that want to keep mainframe applications and extend their capabilities to support new business requirements.
Liberate Nonrelational Data Without A Migration - Mainframe DataShare
Operational and transactional data to support big data architecture and reporting is often trapped in nonrelational databases that don’t integrate with modern data warehouses. Empower true business intelligence by integrating nonrelational databases still in the legacy environment with relational data warehouses, without disturbing IT or end users.
Reduce MIPS Costs Up To 40% Without Impacting End Users - Batch Off The Mainframe
Mainframe optimization through offloading workloads to reduce MIPS can be transparent to end users when planned and executed with adequate computing resources. The Batch Off The Mainframe service leverages off-mainframe processing power to reduce mainframe MIPS and overall cost of ownership.
Extend Business Value of COBOL Applications - Mainframe Field Expansion
Over time, data standards have been implemented and mixed across applications and databases, making change of any kind risky. Our solution reaches across all the lines of code that comprise your applications, both online and batch, and applies controlled, standardized change.
The NRB Group mainframe day 2021 - Containerisation on Z - Paul Pilotto - Seb... (NRB)
Containerization on IBM Z: the notion of containers, their principles, how they work, their benefits on IBM Z, and the reasons to adopt them.
The second part of the presentation focuses on the various solutions available on IBM Z to run your containers at the best place: on IBM Z!
Maintec Technologies is an IT services company established in 1998 that provides IBM training, data center management services, and application development/maintenance on IBM platforms like mainframe, AS/400, and AIX. It has delivery centers in Bangalore, India and offices in the US, UK, and India. Maintec focuses on providing remote data center management and virtual staffing services, and sees opportunities in addressing the scarcity of skilled mainframe professionals through training programs and data center management outsourcing.
What does the road ahead look like for the Micro Focus COBOL products? Let’s take a closer look at the product strategy and vision over the next twelve to eighteen months. Let’s examine the key audience messages and benefits for these products, including their planned roadmap themes and deliverables for the coming year. Whether you’re using RM, ACU, Net Express, Server Express, or moving to the Visual COBOL product, you won’t want to miss this session. Understand the current and future product plans and roadmaps for these COBOL technologies.
NRB Mainframe Day - NRB Mainframe Strategy - Pascal Laffineur (NRB)
NRB is Belgium’s leading mainframe services provider, with a capacity of more than 24,000 MIPS operated from its two Tier 3+ data centres, a mainframe development team of more than 200 collaborators, and specialist consultants accompanying its customers through their mainframe modernisation process. Pascal Laffineur, CEO of the NRB Group, presents the company’s mainframe strategy, showing constant investment and a strong belief in the current and future potential of the mainframe.
How to combine Db2 on Z, IBM Db2 Analytics Accelerator and IBM Machine Learni... (Gustav Lundström)
This document provides an overview and demonstration of combining Db2 on Z, IBM Db2 Analytics Accelerator, and IBM Machine Learning on z/OS for credit scoring applications. It discusses machine learning basics and the machine learning workflow. It then reviews how the Db2 Analytics Accelerator can be used for in-database analytics and machine learning. Finally, it demonstrates IBM Machine Learning for z/OS, including model creation, management, deployment, and continuous performance monitoring capabilities. A live demonstration of a credit scoring application that leverages these technologies is also provided.
The NRB Group mainframe day 2021 - New Programming Languages on Z - Frank Van... (NRB)
In this presentation, you will come to understand the technology and use of modern languages on IBM Z, and how they can help make IBM Z the easiest platform to work with in a hybrid multi-cloud environment.
NRB presents its customer case at the VMware vForum 2019! The must-attend event for IT professionals who want to learn more about operating a successful digital transformation and building new opportunities and connections.
Mainframe Fine Tuning - Fabio Massimo Ottaviani (NRB)
Mainframe cost depends heavily on real CPU load through the IBM mechanism for charging software by the four-hour rolling average. By precisely monitoring various loads (rapidly detecting abnormal CPU peaks, optimizing disk I/O, and using new features such as large pages), EPV provides a toolset to reduce CPU load (and hence IBM software charges) while making better use of it.
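The four-hour rolling average that drives the software charge is, in essence, a windowed mean over recent utilization samples. A minimal sketch, assuming hourly MSU samples for brevity (real reporting uses much finer-grained intervals, and the figures below are hypothetical):

```java
public class RollingAverage {
    // Four-hour rolling average over hourly MSU samples: the charge basis
    // at hour t is the mean of the samples from hour t-3 through hour t
    // (or fewer at the start of the series).
    static double fourHourAverage(double[] msu, int t) {
        int from = Math.max(0, t - 3);
        double sum = 0;
        for (int i = from; i <= t; i++) {
            sum += msu[i];
        }
        return sum / (t - from + 1);
    }

    public static void main(String[] args) {
        // Hypothetical load series with one abnormal CPU peak at hour 2.
        double[] msu = {100, 120, 400, 110, 105, 95};
        for (int t = 0; t < msu.length; t++) {
            System.out.printf("hour %d: %.1f%n", t, fourHourAverage(msu, t));
        }
    }
}
```

The point the presentation makes follows directly from this shape: a single CPU spike keeps the rolling average (and therefore the charge basis) elevated for the next four hours, which is why detecting and smoothing peaks reduces cost.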
This is the planning part of a two-part session for system programmers and their managers who are planning to upgrade to z/OS V2.5. Part one focuses on preparing your current system, whether V2.3 or V2.4, for the upgrade; the system requirements for z/OS V2.5 and how to prepare your system for the upgrade are discussed. Part two covers only the upgrade details for moving to z/OS V2.5 from either V2.3 or V2.4. It is strongly recommended that you attend both sessions for a complete picture of the z/OS V2.5 upgrade.
z/OS V2.5 became generally available on September 30, 2021.
Munich 2016 - Z011599 Martin Packer - More Fun With DDF (Martin Packer)
This document summarizes a presentation about analyzing DDF workloads using performance data. The presentation describes how to classify "alien" DB2 work coming through DDF and determine what is issuing the requests. It provides examples analyzing the behavior of different DDF clients, including identifying a CPU spike from one client and determining if another client is exhibiting "sloshing" behavior. The key lessons are that DDF management requires using WLM and application examination/tuning, and SMF 101 accounting trace records are important for instrumentation.
Cristian Molaro is a DB2 consultant in Belgium who specializes in DB2 for z/OS administration, performance monitoring, and tuning. He has experience speaking at IDUG and GSE conferences and holds an IBM Certified Professional certification along with degrees in chemical engineering and management sciences. The presentation discusses various DB2 for z/OS topics including migrations, performance tools, and critical maintenance.
This document discusses Nicolas' background and experiences in coffee, research, education, and work at Google, Semetis, and Additionly. It then covers 6 trends in internet including the cloud, mobile, social networks, business, internet of things, and big data. Finally, it discusses entrepreneurship and the company Additionly which provides a web-based platform for monitoring and reporting on digital marketing performance.
The document discusses two types of motivation: controlled motivation, which involves feeling forced or obligated to act, and autonomous motivation, which involves acting voluntarily and of one’s own free will. It provides examples of four characters (Christophe, Nathalie, Céline, and Arthur) and, based on their reasons for participating, considers whether their motivation to support a fundraising event is controlled or autonomous.
Business intelligence with web data gabc may (Semetis)
The document discusses how the quantity of digital data being created and stored is exploding, making it easier than ever to access timely and flexible data from a variety of sources. It describes how capturing, monitoring, and measuring data can provide valuable business intelligence insights and help understand customers, industries, and performance. The rise of open APIs and data initiatives have increased opportunities to build applications that gain insights from web data and track business objectives.
This document provides an introduction to Java syntax. It discusses key Java concepts like classes, objects, variables, comments, arrays, and control structures. It explains that classes are blueprints for objects, and objects are instances of classes. It defines variables and lists Java's basic data types. It also demonstrates how to write single-line and multi-line comments. The document shows syntax for declaring and initializing arrays, and provides examples of if/else, switch, while, and for control structures in Java.
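The concepts that introduction lists can be tied together in one short, self-contained sketch (the class and method names are illustrative, not taken from the original document):

```java
public class SyntaxDemo {
    // Arrays: declared with an element type and sized at creation.
    // This method fills and returns an array of the first n squares.
    static int[] firstSquares(int n) {
        int[] squares = new int[n];
        for (int i = 0; i < n; i++) {        // for control structure
            squares[i] = (i + 1) * (i + 1);
        }
        return squares;
    }

    public static void main(String[] args) {
        /* Variables and a basic data type: an int holding the array length. */
        int count = 3;
        int[] squares = firstSquares(count); // {1, 4, 9}

        // if/else control structure
        if (squares[2] > 8) {
            System.out.println("third square is " + squares[2]);
        } else {
            System.out.println("unexpected");
        }

        // switch control structure on an int
        switch (count) {
            case 3:
                System.out.println("three elements");
                break;
            default:
                System.out.println("other");
        }
    }
}
```

`SyntaxDemo` is the class (the blueprint); the array and the `int` are variables of Java's reference and primitive types; and the `//` and `/* */` comments show the single-line and multi-line comment syntax the document describes.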
Google Belgium Research: How radio & TV impact online brand popularity? (Semetis)
The document discusses measuring the impact of offline advertising on online search volume. It presents findings from analyzing brand search trends after TV and radio campaigns. The key findings are that offline ads can significantly increase search volumes, with TV ads showing a 41% average boost and radio ads a 33% boost. It also identifies three main factors for an effective ad boost: 1) give people a reason to search online with a clear call to action, 2) emphasize the website URL, and 3) keep ad messages short to maintain focus. The document suggests analyzing your own brand's ad boost data and modeling campaign effects, and ensuring your website and search ads are optimized to capture online interest generated offline.
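The quoted boosts are percentage lifts of in-campaign search volume over a pre-campaign baseline. A sketch of that calculation, using hypothetical weekly brand-search volumes (the 1000/1410 figures below are made up to reproduce a 41% lift, matching the TV average the study reports):

```java
public class AdBoost {
    // Percentage lift of in-campaign search volume over the pre-campaign baseline.
    static double boostPercent(double baseline, double duringCampaign) {
        return (duringCampaign - baseline) / baseline * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical weekly brand-search volumes before and during a TV campaign.
        double before = 1000;
        double duringTv = 1410;
        System.out.printf("TV boost: %.0f%%%n", boostPercent(before, duringTv)); // prints "TV boost: 41%"
    }
}
```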
DB2 pureScale is a new DB2 feature that allows a DB2 database to span multiple database servers for increased availability, scalability and flexible capacity. It uses a shared disk architecture with Global Parallel File System technology to provide a single database image across nodes. Key components include Cluster Services, InfiniBand networking, global bufferpool and lock manager to coordinate data access and concurrency across nodes. The technology is still in development with initial support for AIX on Power hardware.
The Data Opportunity - Rock your data with Segment.comSemetis
Keynote on the marketing approach of mention.com, one of France's leading start-ups. By using Segment.com, a single API, they collect advanced data which feeds a series of automated marketing tools.
Google Display Targeting Methods, a Semetis approachSemetis
Semetis is a Search & Web Analytics agency who likes to share best practices approaches. You will find here a detailed presentation on Google Display Network targeting methods compiled with thanks to Semetis expertise and know-how.
2. Google Analytics New Interface - Search University 3Semetis
Google Analytics Today: Timo Josten is the European Google Analytics partners responsible. He will show all new Google Analytics interfaces and explain the rational for new dashboards, tips and tricks about measuring metrics in all online advertising campaigns including Search Marketing. He will also talk about interesting new beta's.
Libra : A Compatible Method for Defending Against Arbitrary Memory OverwriteJeremy Haung
http://adl.tw/~jeremy/slides/presentation2.pptx
Attached detailed Analysis of CVE-2013-2094 (&on x86-32).
Exploit the CVE-2013-2094 with animation
There have been more vulnerabilities in the Linux Kernel in 2013 than there had been in the previous decade. In this paper, the research was focused on defending against arbitrary memory overwrites in Privilege Escalation.
To avoid malicious users getting root authority. The easiest way is to set the sensitive data structure to read-only. But we are not sure the sensitive data structure will never be modified by legal behavior from a normal device driver; thus, we posed a compatible solution between read-only solutions and writable solutions to enhance compatibility.
The main idea that we posed not only solves the above problem, but also the general problem which is ensuring that important memory values can only be changed within a safe range.
It is not just set to read-only.
Key Word : Linux Kernel Vulnerabilities、exploit、Privilege Escalation
This document discusses Java Database Connectivity (JDBC) which provides a standard interface for connecting Java applications to various databases. It describes the JDBC API and architecture, including the four types of JDBC drivers. The key points are:
1) JDBC provides a standard way for Java programs to access any SQL database. It uses JDBC drivers implemented by database vendors to translate JDBC calls into database-specific protocols.
2) The JDBC API has two layers - an application layer used by developers, and a driver layer implemented by vendors. There are four main interfaces (Driver, Connection, Statement, ResultSet) and the DriverManager class.
3) There are
This document discusses Java Database Connectivity (JDBC) which provides a standard interface for connecting Java applications to various databases. It describes the JDBC API and architecture, including the four types of JDBC drivers. The key points are:
1) JDBC provides a standard way for Java programs to access any SQL database. It uses JDBC drivers implemented by database vendors to translate JDBC calls into database-specific protocols.
2) The JDBC API has two layers - an application layer used by developers, and a driver layer implemented by vendors. There are four main interfaces (Driver, Connection, Statement, ResultSet) and the DriverManager class.
3) There are
JDBC is a Java API that allows Java programs to execute SQL statements and access databases. There are 4 types of JDBC drivers: Type 1 uses JDBC-ODBC bridge, Type 2 uses native database APIs, Type 3 uses middleware, and Type 4 communicates directly with database using vendor-specific protocols. The basic JDBC process involves loading the driver, connecting to the database, creating statements to execute queries, processing result sets, and closing the connection.
This document provides an overview of IBM DB2 9, including:
- The various editions of DB2 9 for different use cases and hardware configurations
- The common code shared across operating system platforms
- Additional products and features including add-ons, clients, extenders, and connectivity tools
- Descriptions of the main administration and development tools provided with DB2 9
This document discusses Java Database Connectivity (JDBC) and its components. It begins with an introduction to JDBC, explaining that JDBC is a Java API that allows Java programs to execute SQL statements and interact with multiple database sources. It then discusses the four types of JDBC drivers - JDBC-ODBC bridge drivers, native-API partly Java drivers, network protocol all-Java drivers, and native protocol all-Java drivers - and their characteristics. The document proceeds to explain the standard seven steps to querying databases using JDBC: loading the driver, defining the connection URL, establishing the connection, creating a statement object, executing a query or update, processing results, and closing the connection.
The document discusses Java Database Connectivity (JDBC). It describes JDBC as a Java API that allows Java programs to execute SQL statements. It provides methods for querying and updating data within a database. The document outlines the different components and specifications of JDBC, including the JDBC driver manager, JDBC drivers, and JDBC APIs. It also discusses the different types of JDBC drivers and their architectures.
This document discusses Java Database Connectivity (JDBC) and provides details about its architecture and usage. It defines JDBC as an API that allows Java programs to connect to databases and execute SQL statements. The key points covered include:
- The 4 types of JDBC drivers and their differences
- The steps to connect to a database using JDBC, which are defining the connection URL, establishing the connection, creating statements, executing queries, processing results, and closing the connection
- The 3 types of statements in JDBC - Statement, PreparedStatement, and CallableStatement and their usages
Slides of my Perl 6 DBDI (database interface) talk at YAPC::EU in August 2010. Please also see the fun screencast that includes a live demo of perl6 using a perl5 DBI driver: http://timbunce.blip.tv/file/3973550/
The document discusses Java Database Connectivity (JDBC) and provides details on connecting to a database from a Java program. It covers:
1. What JDBC is and its architecture, including key interfaces like Connection, Statement, and ResultSet.
2. The steps to connect to a database using JDBC: loading the driver, defining the connection URL, establishing a connection, creating a Statement, executing queries, processing results, and closing the connection.
3. The different types of JDBC drivers and Statements that can be used.
The document provides an overview of JDBC (Java Database Connectivity), which is a standard Java API for connecting to databases. It discusses the history and evolution of JDBC, the JDBC model, driver types, and the typical programming steps for using JDBC, which include loading a driver, connecting to a database, executing SQL statements, processing results, and closing the connection. It also describes key JDBC classes like Connection, Statement, and ResultSet. The document uses examples with Derby to demonstrate creating a database and table, adding the Derby driver, and setting up a JDBC connection to insert and retrieve data.
The document discusses Java Database Connectivity (JDBC), which provides a standard interface for connecting to relational databases from Java applications. It describes the JDBC model and programming steps, which include loading a JDBC driver, connecting to a database, executing SQL statements via a Statement object, processing query results stored in a ResultSet, and closing connections. It also covers JDBC driver types, the roles of core classes like Connection and Statement, and transaction handling with JDBC.
This document provides an overview and summary of the book "Java Database Programming with JDBC" by Pratik Patel. The summary includes:
1) An introduction to JDBC (Java Database Connectivity), which is an API that allows Java programs to connect to and interact with databases.
2) An overview of the structure of JDBC, which separates low-level driver programming from a high-level application interface. Vendors supply JDBC drivers to connect to different databases.
3) A list of database vendors that have endorsed the JDBC specification.
JDBC provides a standard interface for connecting to and working with databases in Java applications. There are four main types of JDBC drivers: Type 1 drivers use ODBC to connect to databases but are only compatible with Windows. Type 2 drivers use native database client libraries but require the libraries to be installed. Type 3 drivers use a middleware layer to support multiple database types without native libraries. Type 4 drivers connect directly to databases using a pure Java implementation, providing cross-platform compatibility without additional layers.
Mumbai Academics is Mumbai’s first dedicated Professional Training Center for Training with Spoke and hub model with Multiple verticles . The strong foundation of Mumbai Academics is laid by highly skilled and trained Professionals, carrying mission to provide industry level input to the freshers and highly skilled and trained Software Professionals/other professional to IT companies.
This document discusses JDBC architecture and driver types. It introduces JDBC as an API that allows Java applications to connect to databases. The JDBC architecture involves using driver classes like DriverManager and Connection to communicate with a database through a specific driver. There are four types of JDBC drivers: type 1 uses JDBC-ODBC bridge, type 2 uses native database APIs, type 3 uses a middleware, and type 4 is a pure Java driver that connects directly to the database.
java database connectivity for java programmingrinky1234
- JDBC (Java Database Connectivity) is a Java API that allows Java programs to connect and execute queries with various databases. It uses JDBC drivers to connect to different database types.
- There are four main types of JDBC drivers: JDBC-ODBC bridge driver, native-API driver, network protocol driver, and thin driver. The thin driver provides the best performance as no additional software is required on the client or server side.
- To connect to a database using JDBC, a program loads the appropriate driver, establishes a connection, creates statements to execute queries, processes result sets, and closes the connection. The example shows how to connect to an Oracle database using JDB
Ibm db2 10.5 for linux, unix, and windows installing ibm data server clientsbupbechanhgmail
The document provides instructions for installing the IBM Data Server Driver Package on Windows and Linux/UNIX systems. It discusses the driver package's requirements, how to install it using commands or a graphical interface, and how to configure and test connections to databases. The driver package provides runtime support for applications using technologies like ODBC, CLI, .NET and allows connectivity to DB2 databases on IBM mainframe and midrange systems.
The document discusses IBM databases and how they can be used with Ruby on Rails projects. It provides an overview of IBM's portfolio of data servers including DB2, which supports a wide range of platforms. It then discusses how DB2 Everyplace can be used for mobile and embedded applications, and provides an example of how Hyundai Motor Company used it. Finally, it summarizes benefits of using DB2, including performance leadership, ease of use, lower costs through features like compression, and support for new workloads.
JDBC java database connectivity with dbmsKhyalNayak
JDBC provides a standard interface for connecting to and interacting with databases in Java applications. There are four types of JDBC drivers: 1) Type 1 drivers use JDBC-ODBC bridges but are platform dependent. 2) Type 2 drivers convert JDBC calls to native database calls and require client-side libraries. 3) Type 3 drivers use a middleware layer and allow connection to multiple databases from a single driver. 4) Type 4 drivers directly convert JDBC calls to database protocols and are 100% pure Java but require a separate driver for each database.
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc...DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
What is an RPA CoE? Session 2 – CoE RolesDianaGray10
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from downtime in 1 minute = $5-$10 thousand dollars. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
4. DRDA Connectivity options

[Diagram: applications connect to DB2 for z/OS over DRDA through the IBM Data Server Client, the IBM Data Server Runtime Client, the IBM Data Server Driver for JDBC and SQLJ, the IBM Data Server Driver for ODBC and CLI, or the IBM Data Server Driver Package (≥ 9.5 FP3), either directly or through a DB2 Connect server; a DB2 requester can also connect directly.]
5. Table of equivalences

V8 | V9 | V9.5 & V9.7
DB2 Administration Client + DB2 Application Development Client | DB2 Client | IBM Data Server Client
DB2 Runtime Client | DB2 Runtime Client | IBM Data Server Runtime Client
Java Common Client | IBM DB2 Driver for JDBC and SQLJ | IBM Data Server Driver for JDBC and SQLJ
(none) | IBM DB2 Driver for ODBC and CLI | IBM Data Server Driver for ODBC and CLI
(none) | (none) | IBM Data Server Driver Package

This presentation uses V9.7 terminology.
6. IBM Data Server Drivers and Clients selection guide

– IBM Data Server Driver for JDBC and SQLJ: smallest footprint; JDBC and SQLJ
– IBM Data Server Driver for ODBC and CLI: smallest footprint; ODBC and CLI
– IBM Data Server Driver Package: JDBC and SQLJ; ODBC and CLI; OLE DB and .NET; open source
– IBM Data Server Runtime Client: JDBC and SQLJ; ODBC and CLI; OLE DB and .NET; open source; CLP
– IBM Data Server Client: JDBC and SQLJ; ODBC and CLI; OLE DB and .NET; open source; CLP; GUI tools

There is a functional overlap: balance functionality against footprint.
A DB2 Connect Server is not required for Sysplex Workload Balancing (≥ 9.5 FP3), but a DB2 Connect license is still required.
7. Selection guidelines: application view

– Type 4 driver: smallest footprint; for Java-based dynamic SQL applications (dynamic SQL only); Sysplex WLB; seamless failover + ACR (data sharing)
– IBM Data Server Driver for JDBC and SQLJ: for Java-based static SQL applications; supports both static and dynamic SQL; Sysplex WLB; seamless failover + ACR (data sharing)
– pureQuery using the Type 4 driver: easiest to code, recommended for new Java-based static SQL applications; supports both static and dynamic SQL; Sysplex WLB; seamless failover + ACR (data sharing)
– Data Server drivers in ODBC/CLI environments: smallest footprint; for C/C++ applications (dynamic SQL only); Sysplex WLB; seamless failover + ACR (data sharing)
– Data Server drivers in .NET environments: smallest footprint; for C# and Visual Basic applications (dynamic SQL only); Sysplex WLB; seamless failover + ACR (data sharing)

Set db2.jcc.sqljUncustomizedWarningOrException to 1 or 2.
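As a sketch, that global property can be set programmatically before the driver is loaded (it is more commonly passed as a JVM -D option); the class name here is invented for illustration:

```java
// Illustrative only: flag uncustomized SQLJ applications at run time.
// Per the slide, a value of 1 or 2 makes the JCC driver warn about (or
// reject) SQLJ applications whose profiles were never customized.
public class SqljCheckSetup {

    public static void enable(String level) {
        // Must happen before the JCC driver is loaded to take effect.
        System.setProperty("db2.jcc.sqljUncustomizedWarningOrException", level);
    }

    public static void main(String[] args) {
        enable("2");
        System.out.println(System.getProperty("db2.jcc.sqljUncustomizedWarningOrException"));
    }
}
```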
8. Traces available on distributed components

– IBM Data Server Driver for JDBC and SQLJ (type 4) — JCC trace: contains the JCC driver trace, and the DRDA trace as well when TRACE_ALL is specified.
– IBM Data Server Driver for ODBC and CLI — CLI trace, db2trc, db2drdat: the CLI trace contains the driver trace; db2trc contains the DB2 client-side buffers and DRDA buffers (db2drdat is available from 9.5 FixPack 4).
– All other Data Server Clients, DB2 Connect, DB2 ESE and so forth — CLI trace + db2trc + db2drdat: db2drdat contains only the DRDA buffers.

It is a good idea to get used to collecting and analyzing traces on distributed components.
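For reference, the CLI trace is typically enabled with a few keywords in the db2cli.ini configuration file; a minimal sketch (the trace path is a placeholder):

```ini
; Hedged sketch of a db2cli.ini stanza enabling the CLI trace.
[COMMON]
Trace=1
TracePathName=/tmp/clitrace
TraceFlush=1
```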
9. The choice of the right configuration

Only Java clients were able to exploit Sysplex Workload Balancing functions via direct connections:

[Diagram: JDBC, SQLJ and pureQuery applications use the IBM Data Server Driver for JDBC and SQLJ and speak DRDA directly; ODBC, CLI, .NET and open-source applications go through DB2 Connect.]

This functionality has been extended to all clients:

[Diagram: JDBC, SQLJ, pureQuery, ODBC, CLI, .NET and open-source applications all use the IBM Data Server Driver Package and speak DRDA directly.]
10. The choice of the right configuration

Most configurations currently using DB2 Connect can use one of the IBM Data Server products:
– Significantly reduced footprint
– Simplified infrastructure, from 3 tiers to 2 tiers
– Reduced network traffic and code path
– Simpler single-point-of-failure management
– Simpler problem determination
But:
– More complex software administration for maintenance
– A DB2 Connect license is still required
– No gateway functionality
– WLB balancing scope reduced to local applications
11. Replacing DB2 Connect by Clients: Considerations

Cons:
– Reduction in control of workload priorities
– Potential impact to high-priority distributed or mainframe applications
– DB2 Connect Server still required for XA using the multi-transport model

Pros:
– Improved performance through reduced network traffic and code path
– Improved availability: elimination of a point of failure
– Improved problem determination

Client-side configuration management tools are coming soon.
Most XA-compliant transaction managers, such as WAS, use the single-transport model.
12. Some DB2 Connect reserved functionalities

Remember: there is no mechanism available to DDF or WLM to classify a workload BEFORE connection, so critical and low-priority workloads compete for DBATs.

DB2 Connect:
– Provides gateway functionality, connection concentration and a larger scope for WLB and pooling
– Simplifies upgrades and maintenance

[Diagram: many clients funnel through a DB2 Connect gateway to multiple DB2 subsystems.]
13. DB2 Connect and Hipersockets

Linux on System z is probably the best option for a DB2 Connect server:
– Get the availability advantages of System z at IFL price
– HiperSockets support
– Promotes server consolidation: reduces data center costs
But: WLB and Sysplex Distributor do not consider HiperSockets for workload distribution (yet).

[Diagram: DB2 Connect on Linux on z connects over HiperSockets to DB2 subsystems on z/OS.]
14. DB2 Private Protocol

DB2-to-DB2 private protocol (PP) is still supported but officially deprecated in V9.
Anyway:
– No changes since V5 (10 years)
– Not zIIP eligible: CICS transactions are, if using TCP/IP + DRDA
– Lots of functions are only available through DRDA, like static SQL, thread pooling and stored procedures
– Today there is no technical reason to keep using PP
The DBPROTCL zParm was removed in V9:
– DBPROTOCOL(DRDA) is assumed for any BIND/REBIND if DBPROTOCOL is not specified
– You may need to change existing BIND/REBIND processes
– If specified, DSNT226I and a warning RC (4): OPTION IS NOT RECOMMENDED WHEN BINDING PLANS OR PACKAGES
15. DB2 PP would NOT WORK on V?!

Migration shouldn't require application changes, but:
– It has an impact on existing BIND/REBIND processes
– Creation of ALIASes is required
– Use the PP to DRDA Catalog Analysis Tool, DSNTP2DP
16. The private to DRDA protocol REXX tool

– Creates local and remote BIND commands for PLANs and PACKAGEs and builds CREATE ALIAS statements
– You need to change current BIND/REBIND processes to include the remote binds
– Uses catalog information to determine applications having a remote location dependency: embedded dynamic SQL will usually NOT be identified
– For DB2 V9: APAR PK78553 is highly recommended
– Support for DB2 V7 and V8: APAR PK40433
– Shipped with V9; for older versions: http://www.ibm.com/developerworks/exchange/dw_entryView.jspa?externalID=213&categoryID=32
– More information: DB2 for z/OS Installation Guide
18. Application Programming Best Practices

Limit the size of your result set:
– Use the WHERE, GROUP BY, and HAVING clauses
– setFetchSize() can be a hint to the Java driver for scrollable rowset cursors
Help the server use limited block fetch (and extra blocks):
– Use OPTIMIZE FOR n ROWS and FETCH FIRST n ROWS ONLY
– Declare your cursor with FOR FETCH ONLY or FOR READ ONLY, and INSENSITIVE STATIC
– Use CURRENTDATA(NO) and ISOLATION(CS) when possible and avoid ISOLATION(RR)
– Avoid WITH HOLD cursors: CLI applications use WITH HOLD cursors by default
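The block-fetch hints above can be sketched in JDBC; the helper name is invented for illustration, not a driver API:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative sketch of the slide's hints: FOR READ ONLY plus
// OPTIMIZE FOR / FETCH FIRST n ROWS, and setFetchSize() as a driver hint.
public class BlockFetchHints {

    // Decorate a SELECT so the server can use limited block fetch.
    public static String readOnlyQuery(String select, int n) {
        return select
            + " FOR READ ONLY"
            + " OPTIMIZE FOR " + n + " ROWS"
            + " FETCH FIRST " + n + " ROWS ONLY";
    }

    // Not invoked here: requires a live DB2 connection.
    public static void fetchSome(Connection con, String select, int n) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(readOnlyQuery(select, n))) {
            ps.setFetchSize(n);                   // only a hint to the Java driver
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process the row...
                }
            } // cursor closed promptly once all rows are fetched
        }
    }

    public static void main(String[] args) {
        System.out.println(readOnlyQuery("SELECT C1 FROM T1", 100));
    }
}
```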
19. Application Programming Best Practices

Use remote stored procedures to minimize network traffic:
– Native SQL procedures called via DRDA TCP/IP clients are zIIP-eligible (V9)
– Use result set cursors to return data
– Use the COMMIT ON RETURN clause for stored procedures that do not return result sets
Explicitly close your cursors after you have fetched all data.
COMMIT often, but avoid the use of auto-commit.
Use KEEPDYNAMIC(YES) where necessary to avoid excessive prepares, but remember that it prevents the connection from being inactivated.
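A minimal JDBC sketch of calling a remote stored procedure and draining its result-set cursor; the procedure name SYSPROC.MYPROC and its single parameter are hypothetical:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative sketch: one network flow runs the procedure on the server,
// and data comes back through a result-set cursor.
public class SpCall {

    // Build the standard JDBC call escape, e.g. "{call SYSPROC.MYPROC(?,?)}".
    public static String callEscape(String procName, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procName).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(")}").toString();
    }

    // Not invoked here: requires a live DB2 connection.
    public static void invoke(Connection con) throws SQLException {
        try (CallableStatement cs = con.prepareCall(callEscape("SYSPROC.MYPROC", 1))) {
            cs.setInt(1, 42);                       // hypothetical input parameter
            if (cs.execute()) {                     // procedure returned a result set
                try (ResultSet rs = cs.getResultSet()) {
                    while (rs.next()) {
                        // consume the result-set cursor returned by the procedure
                    }
                }
            }
        }
    }
}
```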
20. Application Programming Best Practices

Data Server Drivers use Dynamic Data Format (progressive streaming) by default for LOBs and XML data.
Java drivers use multi-row fetch by default for scrollable cursors. The CLI driver uses DB2BulkOperations.
Data Server Drivers support both atomic and non-atomic multi-row insert: addBatch (Java), array input chaining (CLI) and DB2BulkCopy (.NET).
Consider using static SQL (pureQuery/SQLJ) to get performance and security benefits over dynamic SQL.
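On the Java side, multi-row insert is driven through plain addBatch(); a minimal sketch, with table T1 and its single column invented for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative sketch of batched (multi-row) insert via the JDBC batch API.
public class BatchInsert {

    // Build e.g. "INSERT INTO T1 VALUES (?,?,?)" for a given column count.
    public static String insertSql(String table, int cols) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table).append(" VALUES (");
        for (int i = 0; i < cols; i++) sb.append(i == 0 ? "?" : ",?");
        return sb.append(")").toString();
    }

    // Not invoked here: requires a live DB2 connection.
    public static void insertAll(Connection con, int[] values) throws SQLException {
        boolean auto = con.getAutoCommit();
        con.setAutoCommit(false);                  // avoid a commit per row
        try (PreparedStatement ps = con.prepareStatement(insertSql("T1", 1))) {
            for (int v : values) {
                ps.setInt(1, v);
                ps.addBatch();                     // queue the row client-side
            }
            ps.executeBatch();                     // send the whole batch at once
            con.commit();
        } finally {
            con.setAutoCommit(auto);
        }
    }
}
```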
22. SSL and IPSec

SSL (connection-based, as used by the HTTPS protocol):
– DB2 for z/OS uses the z/OS Communications Server (z/OS CS) Application Transparent Transport Layer Security service (AT-TLS)
– Configuration and setup are required at both server and client
– To enable SSL for Java connections, use properties.put("sslConnection", "true")
– When using db2dsdriver.cfg, add <parameter name="SecurityTransportMode" value="SSL"/>
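A hedged sketch of the Java route: sslConnection is the property named above, while the URL, host and credentials are placeholders; the actual connection call is left in a comment, since it needs the JCC driver and an AT-TLS-enabled server:

```java
import java.util.Properties;

// Illustrative sketch: connection properties for SSL with the type 4 driver.
public class SslProps {

    public static Properties sslProperties(String user, String password) {
        Properties p = new Properties();
        p.put("user", user);
        p.put("password", password);
        p.put("sslConnection", "true");   // the property named on the slide
        return p;
    }

    // Host, port and database name are placeholders.
    public static String url(String host, int port, String db) {
        return "jdbc:db2://" + host + ":" + port + "/" + db;
    }

    // A real connection would then be obtained with:
    //   DriverManager.getConnection(url("db2host", 446, "DB1"),
    //                               sslProperties(user, password));
}
```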
IPSec (host-based):
– An open architecture for security at the IP networking layer
– No application modifications
23. Trusted Context and Roles

A trusted context establishes a trusted relationship between DB2 and another server (middleware or another DB2) by evaluating a set of trust attributes at connect time.
A role groups together one or more privileges and can be assigned to users. Roles are not available outside of the trusted connection.
Reduces the risk of shared application server IDs.
Provides end-to-end auditing.

CREATE TRUSTED CONTEXT REMOTECTX
  BASED UPON CONNECTION USING SYSTEM AUTHID WASADM1
  ATTRIBUTES (ADDRESS '9.26.113.204',
              ADDRESS '$$IPEC1',
              SERVAUTH 'EZB.NETACCESS.ZOSV1R5.TCPIP.IBM',
              ENCRYPTION 'LOW')
  WITH USE FOR SAM, JOE WITH AUTHENTICATION
  ENABLE;
24. DB2 zParms

CONDBAT: maximum number of distributed connections into the DB2 system
– Includes inactive and active connections; may be large
– DB2 queues DBAT requests to become active up to CONDBAT
MAXDBAT: maximum number of database access threads (DBATs) that can be active concurrently
– In many installations the maximum value is determined by the available storage in DBM1 (check IFCID 225)
– Set this value conservatively
CMTSTAT INACTIVE: makes a thread inactive after it successfully commits or rolls back and holds no resources
– Prerequisite for Sysplex workload balancing
– Inactive connections use less storage and free up DBM1 resources
25. DB2 zParms

IDTHTOIN: time in seconds an active server thread remains idle before it is canceled
– Inactive connections are not subject to idle thread timeout
– Strongly recommended not to set it to 0 (which disables it); the default works well
TCPALVER: highly recommended to set to NO (the default); applies to TCP/IP only
TCPKPALV: ENABLE or a time value in seconds; you may need to set a time value, since the TCP/IP stack default is 2 hours
POOLINAC: time duration that a DBAT remains pooled unless it has received a new unit-of-work request
27. Pooling

Benefits:
– Optimization of database attachment resources
– Fewer resources required: memory, CPU, DBATs, network
Types:
– DB2 thread pooling
  • DB2 for z/OS
– DB2 connection pooling
  • DB2 Connect and DB2 UDB for LUW server products
  • DB2 clients and drivers (limited scope)
– DB2 Connection Concentrator
  • DB2 Connect and DB2 UDB for LUW server products
  • DB2 clients and drivers (limited scope)
28. DB2 Connection pooling
[Diagram: applications connect through DB2 Connect to DB2 for z/OS DDF/DBM1; pool agents with inactive connections map to pooled DBATs, connected agents with active connections map to active DBATs]
Allows reuse of an established connection for subsequent
connections
Reduces the cost of opening and closing connections
29. DB2 Connection pooling
Open connections are kept in a pool
When an application requests a disconnection from the host, the
connection to the host is not dropped but returned to the pool
Reduces CPU utilization on the host
Provides little advantage for long-running connections (WAS)
For short and frequent transactions (Web):
– $ connection > $ sql
– Reduces CPU and elapsed time per txn
Transparent to applications
Security: user identity information is passed along the thread
for user authentication
30. DB2 Connection pooling
Implementation:
– DB2 Connect:
• num_poolagents: default=AUTO; =0 disables pooling
• max_coordagents: default=AUTO; SQL1226 returned if exceeded
• DB2CONNECT_IN_APP_PROCESS must be set to NO
– JDBC and SQLJ connection pooling support
• Supported by the IBM DB2 Driver for JDBC and SQLJ
• Connection pooling is transparent to applications
• Homogeneous: all connection objects should have the same
properties
• Heterogeneous: connection objects with different properties
can share the same connection pool
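The pooling behavior described above (a released connection goes back into a pool and is handed out again instead of being destroyed) can be sketched as a toy pool. The names here (SimplePool, Conn) are invented for illustration; the real pooling is internal to DB2 Connect and the JCC driver.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch of connection pooling: release() parks a "connection"
// in the pool, and a later acquire() reuses it rather than creating
// a new one. Conn stands in for a physical database connection.
public class SimplePool {
    static class Conn {
        final int id;
        Conn(int id) { this.id = id; }
    }

    private final Deque<Conn> idle = new ArrayDeque<>();
    private int created = 0;

    // Reuse an idle connection if one exists, otherwise open a new one.
    Conn acquire() {
        return idle.isEmpty() ? new Conn(++created) : idle.pop();
    }

    // "Disconnect": the connection returns to the pool, it is not destroyed.
    void release(Conn c) { idle.push(c); }

    public static void main(String[] args) {
        SimplePool pool = new SimplePool();
        Conn a = pool.acquire();   // creates connection 1
        pool.release(a);           // back to the pool
        Conn b = pool.acquire();   // reuses connection 1
        System.out.println(a == b);       // true
        System.out.println(pool.created); // 1
    }
}
```

This mirrors the "behavior compared" table that follows: connect reuses an agent, disconnect releases it for reuse.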
31. Behavior compared
             Using connection pooling          Not using connection pooling
New connect  Reuses a connection agent         Creates a new connection agent
Commit       Connection agent exclusively      Connection agent exclusively
             retained for the connection       retained for the connection
Disconnect   Connection agent released to      Connection agent destroyed:
             the pool for reuse                no reuse
32. DB2 Connection Concentrator
[Diagram: applications connect through DB2 Connect to DB2 for z/OS DDF/DBM1; connections handled by coordinator agents map to active DBATs, inactive connections map to pooled DBATs]
Allows applications to stay connected without any resource
utilization in DB2 for z/OS
Reduces resources required in z/OS and increases scalability
Data Sharing: provides fail-safe operation and transaction-level
load balancing
33. DB2 Connection concentrator
Splits agents into two entities:
– Logical agents: own the application connection
– Coordinating agents: own a DB2 connection and thread and
execute application requests
Allocates host database resources only for the duration of an
SQL transaction while keeping user applications active
The number of DB2 threads and the resources they consume can be
much smaller than if every application connection had its own
thread
[Diagram: DB2 Connect multiplexes logical agent connections onto coordinator agents that drive DDF/DBM1 in DB2 for z/OS]
34. DB2 Connection concentrator
Implementation:
– max_connections > max_coordagents
– Not the default
Restrictions
– Important restrictions apply: verify before implementing
– Application code changes may be necessary
– Not for transactions using WITH HOLD or KEEPDYNAMIC
– Declared temporary tables (DTTs) must be explicitly dropped
– Only dynamic SQL from CLI
– Dynamic prepare requests from embedded SQL not supported
– DB2 Connect: cannot use inbound SSL
35. Behavior compared
             Using connection concentrator     Not using connection concentrator
New connect  Reuses an agent                   Creates a new connection agent
Commit       Releases agent to the pool        Connection agent exclusively
             for reuse                         retained for the connection
Disconnect   No impact on agent                Connection agent destroyed:
                                               no reuse
36. Connection pooling and Connection Concentrator
Similar but different objectives
Connection pooling
– helps reduce the overhead of database connections and handle
connection volume
– an application has to disconnect before another one can reuse a
pooled connection
Connection concentrator
– helps increase the scalability of DB2 for z/OS by optimizing the
use of your host database servers
– a connection may be available to an application as soon as
another application has finished a transaction and does not
require that other application to disconnect
38. Sysplex support
Challenge: a distributed application server needs to find the
best available path to the data
Dynamic virtual IP address (DVIPA):
– allows servers to be made available independently of hardware
or software failures
– allows multiple LPARs to appear to be a single, highly available
network host
– applications can be seamlessly moved from one LPAR to
another
Sysplex distributor (SD):
– combination of the high availability features of DVIPA and the
workload optimization capabilities of WLM
– work distribution based on a dynamically built priority list
39. DVIPA and Sysplex Distributor – the Concept
DVIPA provides a virtual TCP/IP address into the DS Group
Sysplex Distributor routes the connection request to the most
available member based on WLM recommendation
[Diagram: a client connects through the Sysplex Distributor to data sharing members DB2A on z/OS1 and DB2B on z/OS2, which share a coupling facility (CF)]
40. DVIPA and Sysplex Distributor: better together
Benefit:
– Connections are successful as long as one member is up
– Connection level workload balancing between members
– Setup is isolated to z/OS environment
Drawbacks
– SD on one LPAR may route to a member on a different LPAR, which
results in slightly higher response time compared to direct
member access
– Information about availability of data sharing members is only
considered at creation of a "new" connection, but application
servers typically maintain long-running connections.
41. DVIPA and Sysplex Distributor: usage
In non Data Sharing DRDA connection environment:
– Static VIPA or DVIPA is recommended for network resilience
In Data Sharing DRDA connection environment:
– Distributed DVIPA and Sysplex Distributor recommended for
high availability
Is there any additional benefit in using Sysplex WLB at
the application server or DB2 Connect if using DVIPA and
Sysplex Distributor on z/OS?
– YES! Both DVIPA and Sysplex Distributor on z/OS and
Sysplex WLB on distributed components need to be
enabled to ensure highest availability
42. The server list
Vital element for Sysplex support
At each connection:
– The Sysplex provides a list of weighted priority information for
each connection address
– This list is used by DB2 Connect in order to distribute incoming
connections:
The server list is exploited by DB2 Connect and DB2 Clients
and Drivers for:
– Workload Balancing (WLB): new connections are routed to the
Sysplex member with the highest priority
– Fault Tolerance: try connection to other servers in the list in
descending priority order; an error is sent only if all connections
have failed
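The server-list usage described above (route new work to the available member with the highest weight, fall back down the list on failure, error out only when every member is unreachable) can be sketched roughly as follows. Member and route are invented names; the actual logic lives inside DB2 Connect and the DB2 Clients and Drivers.

```java
import java.util.Comparator;
import java.util.List;

// Toy sketch of client-side use of the weighted server list:
// workload balancing picks the highest-weight member that is up;
// fault tolerance means a down member is simply skipped.
public class ServerList {
    record Member(String ip, int weight, boolean up) {}

    static String route(List<Member> list) {
        return list.stream()
                   .filter(Member::up)                          // skip failed members
                   .max(Comparator.comparingInt(Member::weight)) // highest priority wins
                   .map(Member::ip)
                   .orElseThrow(() -> new IllegalStateException("all members down"));
    }

    public static void main(String[] args) {
        // Weights loosely modeled on the DSNL102I sample above.
        List<Member> servers = List.of(
            new Member("9.12.4.105", 45, false),  // highest weight, but down
            new Member("9.12.4.103", 42, true),
            new Member("9.12.4.104", 18, true));
        System.out.println(route(servers));   // 9.12.4.103
    }
}
```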
43. The server list
The server list can be displayed with db2pd -sysplex
Sysplex List:
Count: 3
IP Address Port Priority Connections Status
9.12.6.70 38320 53 0 0
9.12.4.202 38320 53 0 0
9.12.6.9 38320 21 0 0
APAR PK80474 adds display server list to –DIS DDF DETAIL
-D9C1 DIS DDF DET
DSNL080I -D9C1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
…
DSNL081I STATUS=STARTD
DSNL100I LOCATION SERVER LIST:
DSNL101I WT IPADDR IPADDR
DSNL102I 45 ::9.12.4.105
DSNL102I 42 ::9.12.4.103
DSNL102I 18 ::9.12.4.104
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
44. The server list exploitation
There are no configuration parameters related to enabling
Sysplex WLB on DB2 for z/OS
Used by DB2 Connect and DB2 Clients
zIIP awareness was introduced in APAR PK38867 for DB2 V8 and 9
and z/OS 1R9
There is no Hipersocket awareness available (yet)
[Diagram: DB2 Connect on Linux on z reaching one DB2 for z/OS member over Hipersockets and other z/OS members over the network]
45. DB2 Connect Sysplex support
Sysplex support provides:
– Load balancing: seamlessly balances connections across
different members of a data sharing group; if connection
concentrator is enabled, at transaction granularity
– Fault tolerance: tries alternate members in case of a member
failure; rerouting capability for Sysplex
Enabled by default
– but can be disabled if needed
– you can also establish Sysplex member affinities
Automatic client reroute for Sysplex (ACR) will retry the
connection in case of communication failure. Controlled by:
– DB2_MAX_CLIENT_CONNRETRIES
– DB2_CONNRETRIES_INTERVAL
– DB2TCP_CLIENT_CONTIMEOUT
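A rough sketch of the retry behavior these registry variables control, under the assumption that ACR simply bounds the number of reconnect attempts and the pause between them. The helper names are invented; this is not the driver's actual code.

```java
import java.util.function.Supplier;

// Sketch of an automatic-client-reroute style retry loop:
// maxRetries plays the role of DB2_MAX_CLIENT_CONNRETRIES and
// intervalMillis the role of DB2_CONNRETRIES_INTERVAL.
public class RetryConnect {
    static <T> T connectWithRetry(Supplier<T> connect,
                                  int maxRetries,
                                  long intervalMillis)
            throws InterruptedException {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return connect.get();          // attempt the connection
            } catch (RuntimeException e) {
                last = e;                      // communication failure: retry
                Thread.sleep(intervalMillis);
            }
        }
        throw last;                            // all retries exhausted
    }

    public static void main(String[] args) throws InterruptedException {
        int[] failures = {2};   // simulate two communication failures
        String conn = connectWithRetry(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("comm failure");
            return "connected";
        }, 3, 10);
        System.out.println(conn);   // connected
    }
}
```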
46. Client Sysplex support
Sysplex Workload Balancing (also called transaction-level
load balancing) without having to go through DB2 Connect
– New WLB algorithm has built-in Connection Concentrator
Automatic Client Reroute (with seamless failover on
transaction boundaries)
– When one member of a Sysplex fails, client automatically
attempts to reconnect to another member.
– Application sees no errors (formerly SQL30081N returned)
Direct XA support (for XA TMs using single transport model
such as WAS)
– DB2 z/OS APAR PK69659 needs to be applied.
– enableDirectXA = true (db2dsdriver.cfg)
– Multi-transport models such as BEA Tuxedo not supported
47. Client Sysplex support
Sysplex support is configured using the db2dsdriver
configuration file. Use the Configuration Assistant (GUI) with
IBM Data Server Client
For DB2 Connect migration, the command db2dsgcfgfill
creates a db2dsdriver.cfg file with most of the
required information
WLB is NOT enabled by default: enableWLB is false
The db2dsdriver.cfg file looks like:
<databases>
<database name="DB9C" host="wtsc63.itso.ibm.com" port="38320">
<WLB>
<parameter name="enableWLB" value="true"/>
<parameter name="maxTransports" value="100"/>
</WLB>
<ACR>
<parameter name="enableACR" value="true"/>
</ACR>
</database>
</databases>
48. db2dsdriver.cfg
An XML configuration file that has to be manually edited and
used to specify settings for non-Java data server drivers
When CLI settings are specified in multiple places, they are
used in the following order:
1. Connection strings parameters
2. db2cli.ini file
3. db2dsdriver.cfg file
Tip: Syntax errors are silently ignored in db2dsdriver.cfg. To
ensure your settings are used, set diaglevel 4 (through the CLP
or by manually updating db2cli.ini) and check db2diag.log for
error messages
50. JCC Type 4 Sysplex Workload Balancing
JCC type 4 supports Sysplex Workload Balancing:
– JDBC 2.0 datasource since DB2 Connect V8 FP10; JCC 2.7.xx
– JDBC 1.2 DriverManager since DB2 Connect 9.5; JCC 3.50.xx
Typical DataSource Properties:
– enableSysplexWLB=true: enables Sysplex WLB. Default is
false (disabled)
– maxTransportObjects: max # of connections to the DB2 server
from this DataSource. Default value is -1, meaning no limit
Global properties defined in Global Properties File:
– db2.jcc.maxTransportObjects: max # of connections to the DB2
server across all datasources. Default value is -1 (no limit)
– db2.jcc.maxTransportObjectIdleTime: time in seconds a
connection stays idle before it is closed. Default 60 sec
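A minimal sketch of supplying the DataSource properties listed above programmatically, for example as JDBC connection properties. No connection is opened here; a real attempt would need the JCC driver on the classpath and a reachable DB2 server, and the value 80 for maxTransportObjects is an arbitrary example.

```java
import java.util.Properties;

// Builds the property set a JCC application might pass when enabling
// Sysplex workload balancing. Only the property names come from the
// slide; the values here are illustrative.
public class SysplexProps {
    static Properties sysplexProperties() {
        Properties p = new Properties();
        p.setProperty("enableSysplexWLB", "true");   // default is false
        p.setProperty("maxTransportObjects", "80");  // -1 would mean no limit
        return p;
    }

    public static void main(String[] args) {
        Properties p = sysplexProperties();
        // With the JCC driver present, these could be passed to
        // DriverManager.getConnection(url, p).
        System.out.println(p.getProperty("enableSysplexWLB"));   // true
    }
}
```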
52. Agenda
CONNECTIVITY
BEST PRACTICES
POOLING
SYSPLEX SUPPORT
CONCLUSIONS
53. New IBM Redbook
Architecture of DB2 distributed
systems
Distributed database
configurations
Installation and configuration
Security
Application programming
Data Sharing
Performance analysis
Problem determination