Mainframe Technology Overview

From time to time, information systems must be modified in response to changes in legislation (such as SOX), standards, or currency (such as the adoption of the euro). Such changes affect many components of an information system and therefore carry a high risk factor.

Speaker Notes
  • Course schedule: Methodology – SUN AM. C-Scan language bits and bytes – SUN PM. The major special-purpose “exit” – MON AM (includes exercise time). The special-purpose files that we use to analyze and convert systems, with their corresponding reports – MON PM. Other special-purpose “exits” – TUES AM (includes exercise time, which will extend into the afternoon). A walk-through of the conversion process, including a case study you will do on your own – WED and THURS AM. Summary – THURS PM. No homework, but you’ll find that reading through the reference material and reviewing your class notes will be helpful.
  • We sell service, not technology. It is the combination of our people, our methodology, and the tools that we provide that delivers our service.
  • Describe the flow process of all six phases. A questionnaire covers the management, development, and maintenance processes; the existing systems and applications; the hardware, data, and system software; the current practices; and IS principles. A survey and assessment is completed to identify the Year 2000 date-affected components (software, hardware, procedures, databases, etc.). Clusters, data bridges, and interfaces are also identified for conversion. The conversion process includes multiple phases in an iterative process, fine-tuning the conversion controls until compilable code is achieved. Intra-cluster tests (unit and regression) are then performed on all components within the cluster. Acceptance testing is a client-driven testing process that demonstrates the application’s ability to meet the acceptance criteria previously defined with the client.
  • Benefits: maximizes automation of the conversion process; minimizes interference with production systems maintenance; minimizes the code-freeze period; progressive conversion of logically linked subsystems (clusters); conversion process transparent to the end user. Capabilities: produces a database of the organization’s software inventory; produces a database of all date fields and their cross-references; provides a large range of software inventory cross-reference reports; provides computerized templates that control the conversion process; provides automated update capabilities that support DB and application logic changes. The Toolbox’s automated analysis, conversion, management, and control services simplify the conversion process, significantly minimizing the risks usually involved in such a large project. It consistently automates conversions, minimizing human mistakes that are difficult to detect; enables the correction of errors by applying changes across multiple applications; and develops control parameters to use as input for performing the actual conversion. Changes to a program do not affect the performance of the conversion, and the customer retains control over the source code, thereby minimizing freeze time.
  • The first step of scanning the environment is to create a dataset of recent log activities. Depending on the client’s archiving procedures, up to 15 months of the system log files may be processed. The purpose is to identify the active jobs, programs, and online transactions in order to reduce the conversion repository to only active components. In this step the DBLOG dataset is created from SMF and other monitor log files, such as TMON, Omegamon, etc. The result is a list of jobs, programs, and transactions, including statistics on how often each has been used. The jobs should be tailored according to the site’s archiving procedure and system monitor type. In the case of a third-party log manager such as MXG, a job should be tailored according to its reports. The baseline assumption is that these reports were verified by the client and found to be accurate. (A hedged JCL sketch of the SMF extract follows this note.)
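    As a sketch of the SMF side of this step (dataset names and the record-type selection are assumptions, and the actual DBLOG build runs through C-Scan jobs tailored per site), the raw extract could come from IBM's standard SMF dump utility, IFASMFDP:

      //SMFDUMP  JOB (ACCT),'ITD SMF EXTRACT',CLASS=A
      //* Dump job/step accounting records for later DBLOG processing.
      //* SYS1.MAN1 and ITD.SMF.EXTRACT are hypothetical names.
      //DUMP     EXEC PGM=IFASMFDP
      //INDD1    DD  DISP=SHR,DSN=SYS1.MAN1
      //OUTDD1   DD  DSN=ITD.SMF.EXTRACT,DISP=(NEW,CATLG),
      //             UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)
      //SYSPRINT DD  SYSOUT=*
      //SYSIN    DD  *
        INDD(INDD1,OPTIONS(DUMP))
        OUTDD(OUTDD1,TYPE(30))
      /*

    SMF type 30 records carry job and step accounting (job name, program name, execution details), from which the active-component usage statistics can be summarized.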
  • INIT – Example: INIT(BUILDW(10000' ')) to initialize the work area with blanks. TERM – calculates and writes summary values of fields. INITMEM – Example: INITMEM(BUILD(1:$$MEMBER)) to save the member name. TERMMEM – same as TERM, but the summaries are per member. PROCESS/OUTREC – processes a record from the input (worked on the board in class). APPEND – report example given in class. HEADER – creates information for page header lines. TRAILER – same as above, for trailer lines. KEYS – Example: KEYS(1,8,HEADER('DETAILS FOR':1,8,/)). A worked report sketch follows this note.
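    As a minimal sketch of a report using these entry points (patterned directly on the example on slide 18; any C-Scan syntax beyond the constructs shown in this deck is an assumption), the following lists signed numeric fields per member, with a page header keyed on the member name:

      //SYSCIN DD *
       S010_RPT BUILD,DD1=IN010,DD2=OUT010,PARMDD=P010,
                D=|#+?|,SORTO='(1,8,CH,A,10,32,CH,A)'
      //P010 DD *
       OUTREC (
         IF (INDEX(6,72,' ?PIC ?S9('),EQ,0) EXIT(LEAVEREC)
         IF ($$IDXC(1,COB,32),EQ,'FILLER') EXIT(LEAVEREC)
         BUILD($$MEMBER,10:$$IDXC(1,COB,32),/)
       )
       KEYS(1,8,HEADER('DETAILS FOR ':1,8,/))

    OUTREC keeps only records containing a PIC S9( clause, drops FILLERs, and writes the member name (columns 1-8) and the field name (columns 10-41); KEYS then breaks the sorted output into a headed section per member.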
  • Examples should be prepared for those variables that need them.
  • Survey questionnaires are distributed to the client. Site Preliminary Questionnaire: one of the first contacts with the client is a request for general information about the client’s environment, made through a Site Preliminary Questionnaire. Its purpose is to understand the complexity and quantities of the most important components of the client’s environment; it also identifies the main programming languages, the main DBMSs, system monitors, and naming conventions. Use the document to identify components not yet supported by the Toolbox. System Questionnaires: each site will have many applications, projects, or systems being processed, and a System Questionnaire should be distributed to each client resource responsible for one. Its purpose is to collect information about every active application system or project that exists on site and is valid for the conversion process, by obtaining the naming conventions per system and a list of libraries, files, databases, and I/O modules. This is needed for setting up the tools for performing the survey and for clustering the systems. The information from the System Questionnaires will also be used for learning the client’s environment; it will be compared with the actual components found by scanning the environment with IT Discovery, which includes reports for flagging discrepancies in the client information.
  • This illustrates the process flow during Survey and Assessment using IT Discovery.
  • Properties of Objects: each object in AppBuilder has a set of properties or attributes that describe it. Some properties are common to all object types (for example, Name and System Id), but for the most part, each object type has a different set of properties. In other words, where a Field might have a data format and length, a Rule would not; it would have an execution environment. At this early stage of your AppBuilder learning, you are not expected to know all the properties of all the object types you will encounter; they will become apparent as you progress through the course. All you need to know at the moment is that when you create an object, the properties are set to default values, which you may well have to change. To see the properties for any object, press Alt+Enter from the hierarchy diagram.
  • The goal of this process is to create a relationship between programs and date fields. The process is primarily automated through IT Discovery jobs and includes the following steps. Repository Files Merged: the language-oriented DBDTSCAN files are merged into a common global repository file. Repository Files Populated: the final process of building the repository involves populating the repository file; in this step the unnormalized DBDTSCAN dataset is exploded into many additional datasets such as DBCOPY, DBCALL, DBCALLED, etc. Duplicate information is deleted, and information about the same entity is merged from various records. Repository Transactions Completed: this process completes the transaction information for the repository. The DBDTSCAN repository includes information about relationships between programs and transactions, which is merged into DBTRAN. Up to this point, the transaction repository covers only the relations between transactions and programs; the objective of this step is to complete the information regarding the relations between online programs and files (DDnames, DSnames, or DBnames).
  • What is an Object? All objects have the following general properties: Audit (who, when, where, etc.); Remote Audit (when created on the Enterprise repository); Text (a description of the object); and Keywords (which help with searching for objects). Some objects, such as Rules, also have source code associated with them.
  • Mainframe Technology Overview

    1. Mainframe Technology Overview (March 2008)
    2. Mainframe Technology Overview
       • BluePhoenix Mainframe Architecture
       • C-Scan
       • Toolbox and IT Discovery
       • Repository Files
       • IT Discovery Database and Query Facility
    3. BluePhoenix Mainframe Architecture: integration of the following:
       • Methodology – Designed to solve the problem systematically with little disruption to the client.
       • C-Scan – Conversion engine with its own interpreted language components and exits.
       • ISPF Panels and REXX – Workflow engine that enforces the methodology and controls processing (batch and online).
       • Mini-Scheduler – Controls batch job flows.
       • Toolbox – JCL and programs in C-Scan and REXX for analysis and conversion.
       • Repository – Set of detailed tables describing the environment and containing rules for conversion.
    4. BluePhoenix Mainframe Architecture (process-flow diagram). Flow labels: Inventory Reports; Generation of Tools; Conversion; Unit Test; System Test; Implementation; Repository Enhancement. Artifact labels: Repository Files; Specific Cluster Metadata Libraries; Generation Parameters Libraries; Converted Libraries; Implementation Parameters.
    5. C-Scan
       • Assembler Interpreter – Provides some of the assembler abilities, such as dynamic file allocation and edit of the PDS directory.
       • Event-Driven Engine – Driven by record-by-record automated reading, with logic controlled by events or keys.
       • Writes Records – Creates records with implicit and explicit location setting of fields; is able to write and execute its own code.
       • Flexible – Has COBOL-like structures for multiple file processing; Record Merge; Subprogram.
       • Easy to Learn – Simple and consistent syntax.
    6. The Toolbox
       • Global Assessment/Impact Analysis – Performs inventory definition and analysis for migration or field adjustment projects.
       • FieldEnabler – Performs conversions of field lengths and formats automatically while protecting the corporate knowledge asset.
       • COBOL/LE-Enabler – Performs COBOL/LE conversions.
       • DBMSMigrator – Performs migration from non-relational to relational databases and from their access language to SQL (DBMSMigrator is a PC-based tool).
    7. Converting Programs, Files, Control Statements, and JCL
       • Only C-Scan-produced C-Scan scripts are used.
       • The Toolbox is self-documenting and controlled, and it comments converted source.
    8. Tools and Products: C-Scan
       • Global Assessment – IT Discovery: a product made up of C-Scan programs (and others) used for the static understanding of the application components within a full system environment.
       • Field Adjustment – The Toolbox: tools made up of C-Scan programs (and others) used for the analysis and eventual conversion of specific system components through dynamic propagation of field-tracking data within and between programs.
    9. Toolbox Selection Screen (1 of 3)
    10. Toolbox Selection Screen (2 of 3)
    11. Toolbox Selection Screen (3 of 3)
    12. Some Repository Files: DBJCL, DBDSN01, DBSOURCE, DBDTSCAN, DBPGROUT, DBLOAD, DBTRAN, DBCALL, INVLIST
    13. Mainframe Technology Overview
       • BluePhoenix Mainframe Architecture
       • C-Scan
       • Toolbox and IT Discovery
       • Repository Files
       • IT Discovery Database and Query Facility
    14. Entry-Points (Events)
       • INIT – Before the first record of the input.
       • TERM – After the end of the input file.
       • INITMEM – Before reading the first record of a member of a library.
       • TERMMEM – After the last record was read from a member of a library.
       • OUTREC – After reading each record.
       • HEADER – At the beginning of a page when generating reports.
       • TRAILER – At the end of a page when generating reports.
       • KEYS – When the defined keys are changed.
    15. Power of C-Scan: Some of the Available Variables
       • $$WORK – Work area for that C-Scan step
       • $$GLOB – Work area that bridges multiple C-Scan steps
       • $$RECORD – The input record
       • $$OUTREC – The output record
       • $$DSN – The DSN on the input DD statement (PDS or SEQ)
       • $$MEMBER – The member name within a PDS
       • $$PNVMEM – The member name of a PANVALET data set
       • $$VOLSER – The VOLSER of the input dataset
       • $$DATE – The current date in installation format
       • $$DATER – The current date in format “mmm, dd yyyy”
       • $$TIME – The current time (HH:MM:SS)
       • $$JOB – The name of the job specified on the JOB statement
       • $$STEP – The name of the step from the EXEC statement
       • $$UID – The UserID that last updated the member
       (A short sketch of these variables in use follows this slide.)
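    As a minimal sketch of these variables in use (reusing only the INITMEM and BUILD column syntax shown elsewhere in this deck; the PARMDD name is hypothetical), the step below would emit one line per library member, with the dataset name, member name, and the user ID that last updated it:

      //P010 DD *
       INITMEM (
         BUILD($$DSN,46:$$MEMBER,56:$$UID,/)
       )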
    16. Power of C-Scan: INDEX Function
       • Searches for a string in the specified part of the source record; can use specialized wildcards.
       • Returns 0 to indicate false (string not found), or the position of the found string.
       • Creates the following variables:
         • $$IDXA – The text specified by the INDEX function.
         • $$IDXB – The text following the text that was found.
         • $$IDXC – The text preceding the text that was found.
    17. Power of C-Scan: Some Tables
       • Used to delimit fields (set pointers)
       • Continue as long as…
         • ALU – Uppercase alphabetic
         • ALN – Alphanumeric (uppercase)
         • NUM – Number
         • DSN – Characters acceptable for dataset names
         • COB – Characters acceptable for COBOL field names
         • N… – Continue as long as it is not …
       • Also specified characters and wildcards
    18. Power of C-Scan: Use of Index, Tables, and Wildcards
       From a source library, create a list of fields with a PIC of X(50), along with the member name:

       //SYSCIN DD *
        S010_RPT BUILD,DD1=IN010,DD2=OUT010,PARMDD=P010,
                 D=|#+?|,SORTO='(1,8,CH,A,10,32,CH,A)'
       //P010 DD *
        OUTREC (
          IF (INDEX(6,72,' ?PIC ?X(50)'),EQ,0) EXIT(LEAVEREC)
          IF ($$IDXC(1,COB,32),EQ,'FILLER') EXIT(LEAVEREC)
          BUILD($$MEMBER,10:$$IDXC(1,COB,32),/)
        )
    19. C-Scan in Production – JCL
    20. C-Scan in Production – Control Statements
    21. C-Scan in Production – Control Statements
    22. C-Scan in Production – XTBLDREC
    23. Mainframe Technology Overview
       • BluePhoenix Mainframe Architecture
       • C-Scan
       • Toolbox and IT Discovery
       • Repository Files
       • IT Discovery Database and Query Facility
    24. The Mainframe Tool Libraries
       • JOB (JFIX; JUSER) – JCL and Mini-Scheduler lists
       • PARM (PFIX; PUSER) – Programs and control tables written in C-Scan
       • REXX – Mini-Scheduler and screens
       • JCLFIX – JCL templates for situation-specific jobs
    25. Controlling the Flow – Screens
    26. Controlling the Flow – Batch: The Mini-Scheduler
       • All jobs are submitted with the same JOBNAME and JOBCLASS.
       • Jobs have two steps added, STEP00 as the first step and STEPFF as the last:
         • STEPFF writes to the LOG that the job completed successfully (based on a Condition Code less than 5).
         • STEP00 causes a failure (System 806-4) if the previous job’s STEPFF did not write to the LOG (based on Run Type).
       • This permits:
         • A high level of automation
         • A high number of jobs that remain easy to debug
       (A sketch of one possible wiring follows this slide.)
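    A minimal sketch of one way such a gate could be wired in JCL is shown below. The deck does not show the actual implementation, so the marker-module scheme, dataset names, and module names here are all assumptions; only the condition-code logic and the S806-4 behavior come from the slide. The idea: STEPFF records success by link-editing a marker load module that the next job's STEP00 tries to execute; if the marker is missing, the fetch fails with the documented 806-4 (module not found) abend.

      //STEP00   EXEC PGM=JOB017OK
      //STEPLIB  DD  DISP=SHR,DSN=ITD.SCHED.LOADLOG
      //* An S806-4 abend here means the previous job (017) never ran its STEPFF.
      //*
      //* ... the job's real work steps go here ...
      //*
      //STEPFF   EXEC PGM=IEWL,COND=(5,LE)
      //* COND=(5,LE) bypasses this step if any prior step ended with CC >= 5,
      //* so the success marker is only link-edited when every CC is below 5.
      //SYSLMOD  DD  DISP=SHR,DSN=ITD.SCHED.LOADLOG
      //SYSLIB   DD  DISP=SHR,DSN=SYS1.LINKLIB
      //SYSPRINT DD  SYSOUT=*
      //SYSLIN   DD  *
        INCLUDE SYSLIB(IEFBR14)
        NAME JOB018OK(R)
      /*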
    27. What We Need to Know About a Program
       • Language
       • Files
       • Other programs with the same files
       • JCL (or OLTP) used
       • Routines called
       • Routines calling
       • Copies
       • Loads
       • DBMS structure
       • Etc.
    28. What We Need to Know
    29. What We Need to Know: Collect JCL with PROCs (diagram labels: Source JCL; ITD External Reader; OPJCL; PROCLIB; ITD; DBJCL)
    30. Cross-Referencing Components
       • Cross-referencing loads, JCL, CICS, and source
       • Identifying “missing” entities – sources not run, JCL or TRANS without source
       • A turnaround screen links load and source where names differ
    31. Multi-Language Interface (diagram). Labels: Meta-COBOL #SRV22; JCL; PROCs; PARMs; CSD; DBMS; Load; Repository Reports (Date Fields; Records); source and copy languages: COBOL, PL/I, Easytrieve, Assembler, Other (4th GL); Master Repository Inventory.
    32. Revise Commands, Conflicts, and Fixes Screen
    33. IT Discovery
       • Provides understanding of the mainframe operational and application environments, their inventory, and interrelationships.
       • Accesses, analyzes, and cross-references software components on an MVS mainframe.
       • Makes sense of the spaghetti that is an MVS application environment: finds redundancies, missing entities, and inefficient interconnections.
       • Creates a DB2 database of the components, reportable through SQL.
       • Includes a network-based query facility (QF).
    34. BluePhoenix IT Discovery: Static Information Collection (diagram). Inputs: OLTP and batch SOURCE and COPIES; flat files; JCL, PROC, and control statements; LOAD modules; CICS tables (CSD); IMS/DC; database definitions (DB2, IMS, IDMS); job schedulers. These feed the IT Discovery Repository (DB2), which serves reports, queries, and the network Query Facility (web queries).
    35. Process Stages
       Stage 1 – Build Environment. Input: product libraries; customer parameters. Output: customized environment; empty Repository datasets.
       Stage 2 – Analyze Inventory Components. Input: production source programs and copy libraries; production JCL, PROC, and control statement libraries; production Load libraries; production OLTP control data; production DB formats. Output: populated Repository; reports; turnaround: resolve Load without corresponding source.
       Stage 3 – Analyze Source Entities. Input: production source programs and copy libraries; Repository Files. Output: updated Repository; reports; turnaround: resolve variables containing routine names; resolve variables containing DD names.
       Stage 4 – Process Relational Database. Input: database parameter definitions; Repository Files. Output: DB2 Repository.
    36. Analyze Source Entities
       • This step does the in-depth static analysis of the source programs.
       • ITD builds a Repository file that lists and details entities and relationships from within COBOL programs to the “outside world,” including flat files, VSAM, DB2, IMS, IDMS, and routines.
       • Other languages are analyzed without routine calls.
       • The Repository file includes detailed information on the interfaces between routines and between programs and data.
    37. Analyze Source Entities: Turnaround Screens
       • “Variable Routines” – where a routine is called using a variable field for the name, and that variable is not defined as a constant in the Data Division.
       • “Variable DDNAMEs” – where a DDNAME is a variable field, and that variable is not defined as a constant in the Data Division.
    38. Query Facility Architecture (diagram): multiple QF Clients connect to a QF Server, which accesses the IT Discovery Repository (DB2).
    39. ITD – Incremental Running (diagram labels: IT Discovery Collection Scripts; Control Datasets; Catalog; IT Discovery Repository; C-Scan Engine; DB Definitions; Libraries; changes, additions, and deletions ONLY).
    40. Mainframe Technology Overview
       • BluePhoenix Mainframe Architecture
       • C-Scan
       • Toolbox and IT Discovery
       • Repository Files
       • IT Discovery Database and Query Facility
    41. DBSOURCE Dataset (1 of 4)
    42. DBSOURCE Dataset (2 of 4)
    43. DBSOURCE Dataset (3 of 4)
    44. DBSOURCE Dataset (4 of 4)
    45. IT Discovery – DB2 Repository
    46. IT Discovery – DB2 Repository
    47. TBINVLIST Table (1 of 2)
    48. TBINVLIST Table (2 of 2)
    49. Mainframe Technology Overview
       • BluePhoenix Mainframe Architecture
       • C-Scan
       • Toolbox and IT Discovery
       • Repository Files
       • IT Discovery Database and Query Facility
    50. Query Facility Architecture
       • Thin Client – Users have no software installed and access the server via a browser (intranet).
       • Server – Operates software that:
         • Provides screens and interfaces to the user
         • Manages users, queries, and access to queries
         • Passes SQL queries from clients to the mainframe (via DB2 Connect) and returns results
       • Mainframe – Contains the ITD relational database (DB2)
    51. Query Facility Users
       • Administrator – Connects repositories; maintains Users and the Public and Shared query folders.
       • Power User – Can create and run own queries in the Private folder; can copy to the Shared folder; can run queries from Public and Shared, and copy them to Private.
       • Regular User – Can only run queries from the Public folder.
    52. Query Facility Folders
       • Public – Categories and queries available; “Read Only” to everyone; maintained by the Administrator.
       • Shared – Categories and queries available; “Read, Write, Not Update” to Power Users (if enabled) and the Administrator.
       • Private – Queries only; “Read, Write, and Update” to Power Users and the Administrator.
    53. Query Facility – Categories and Queries
       • Categories – Groupings of queries
       • Categories belong in folders
       • Queries – Use standard DB2 SQL (a sketch of a typical query follows this slide)
       • Queries can have drill-down menus from specific results to other nested queries
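    Since queries are standard DB2 SQL against the IT Discovery repository tables (such as the TBINVLIST table shown on slides 47-48), a typical query might look like the sketch below. The column names and the parameter marker are assumptions for illustration; the actual table layout is generated per site.

      -- Hypothetical columns: LIBRARY_DSN, MEMBER_NAME, LANGUAGE.
      -- Lists COBOL members whose names match a user-supplied mask.
      -- Pass a real mask such as 'PAY%'; a bare '%' returns the whole inventory.
      SELECT LIBRARY_DSN, MEMBER_NAME
      FROM   TBINVLIST
      WHERE  LANGUAGE = 'COBOL'
        AND  MEMBER_NAME LIKE :PGMMASK
      ORDER BY LIBRARY_DSN, MEMBER_NAME;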
    54. Query Facility – Categories and Queries
    55. Query Facility – Queries
    56. Query Facility – Running and Drilling
       • Queries are run by pressing EXECUTE
       • Parameter values follow the SQL (see the query sketch above); avoid using only ‘%’
       • Drill-down menus are available on underlined results
       • Nested queries are available through the drill-down menus
    57. Query Facility – Running Queries
    58. Query Facility – Drilling Down
    59. Query Facility – Nested Query
    60. Query Facility – Filters and Export
       • Results columns can be filtered for viewing
       • Results can be exported to XML, CSV, Excel, or HTML
    61. Query Facility – Filtering Results
    62. Query Facility – Exporting Results
    63. Query Facility – Exporting Results
    64. Query Facility – Creating Queries
       • Created by Power Users and Administrators
       • Must be in a Category
       • Can create drill-down menus
       • Can link drill-down menus
    65. Query Facility – Query Creation
    66. Query Facility – Drill-Down Linking
    67. Query Facility – Online Documentation
    68. Query Facility – Online Documentation
    69. Thank You!
