This document discusses datasets, catalogs, and VSAM datasets in IBM mainframe systems. It describes how datasets can have VSAM or non-VSAM organization and lists the types of non-VSAM datasets (PS, PDS, and PDSE). It also details the VSAM dataset types (KSDS, ESDS, and RRDS), their components, and how they are defined using the IDCAMS utility. Finally, it explains what catalogs are and the different types of catalogs used. The deck is preceded by a short primer on JCL and the JOB statement.
JCL
Job Control Language (JCL) is the name for the scripting languages used on IBM mainframe operating systems to instruct the system on how to run a batch job or start a subsystem.
JCL acts as an interface between application programs and the MVS operating system.
JCL is used for the compilation and execution of batch programs.
Apart from the above functionalities, JCL can also be used to:
1. Control jobs.
2. Create GDGs.
3. Allocate PDS and PS files with IBM utilities.
4. Create PROCs.
5. Sort files (a sketch follows this list).
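As an example of point 5, here is a hedged sketch of a sort job using the SORT utility. The dataset names, accounting code, and sort field are illustrative, not taken from the original deck.
//SORTJOB  JOB (8012T),'SORT EXAMPLE',CLASS=A,MSGCLASS=A
//*  Copy KC03P83.INPUT.FILE to KC03P83.OUTPUT.FILE, sorted
//*  ascending on the 10-character field starting in column 1
//SORT01   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=KC03P83.INPUT.FILE,DISP=SHR
//SORTOUT  DD DSN=KC03P83.OUTPUT.FILE,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*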
JCL Coding Sheet
JCL statements are coded against fixed columns: columns 1-2 hold the // identifier, the statement itself must fit within columns 1-71, column 72 is the continuation column, and columns 73-80 are reserved for sequence numbers.
1,2,3 ---------- Column Numbers ---------- 72,73 ------------ 80
//JOBNAME JOB PARAMETERS ------ COMMENTS
//        EXEC
//        DD
//* ------------ Comment (an * in column 3 marks a comment line)
//  ------------ Null statement (marks the end of the JCL)
Where:
// ------------- Identifier field
JOBNAME -------- Name field
JOB, EXEC, DD -- Statement / Operation field
NOTE
To continue parameters on the next line, end the last parameter with a comma (",") and code the next parameter beginning anywhere in columns 4-16 of the following line.
There are three statements in JCL (a minimal job using all three is sketched just after this list).
JOB
EXEC
DD
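A minimal sketch showing all three statements together. IEFBR14 is an IBM-supplied do-nothing program; the job name and accounting code are illustrative, reusing the examples shown later in this section.
//KC03P83A JOB (487A),'JANAKI RAM',CLASS=A,MSGCLASS=A
//*  JOB statement: names the job and gives job-level parameters
//STEP1    EXEC PGM=IEFBR14
//*  EXEC statement: names the step and the program to run
//DD1      DD DUMMY
//*  DD statement: describes a dataset used by the step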
JOB Statement:
The JOB statement identifies the job name and the job-related parameters.
JOBCARD = job name + job related parameters.
Syntax
//JOBNAME JOB ACCOUNTING-INFORMATION,'USERNAME',CLASS=A-Z/0-9,
//         NOTIFY=&SYSUID/RACF-ID,MSGCLASS=class,
//         MSGLEVEL=(X,Y),PRTY=0-15,
//         TIME=(M,S),REGION=nK/nM,TYPRUN=SCAN/HOLD/COPY,
//         COND=(RC,operator),RESTART=stepname
(COND=EVEN, COND=ONLY, and the stepname subparameter of COND apply to EXEC statements rather than the JOB statement.)
JOBNAME
It is required to distinguish this job from the other jobs in the SPOOL.
It must be 1 to 8 characters long (minimum 1, maximum 8).
The first character must be alphabetic or national ($, #, @).
The remaining characters can be alphabetic, numeric, or national ($, #, @).
Example
Job names for personal or lab sessions
Userid + 1 or 2 characters
KC03P83$ ------- Userid is KC03P83
KC03P84@ ------- Userid is KC03P84
ACCOUNTING INFORMATION
It is a positional parameter, coded immediately after the JOB keyword.
It is used for billing purposes: every submitted job consumes some CPU time, and the accounting information determines which account that CPU time is charged to.
Examples
(8012T)
(80121I)
(8012M)
USERNAME
It is used to identify the user who has written the JCL.
It can be a maximum of 20 characters.
Note: Both accounting information and user name are positional parameters; the remaining job card parameters are keyword parameters.
EX1: //KC03P83A JOB (487A),'JANAKI RAM'
EX2: //KC03P84@ JOB (488T),'SOMISETTY'
NOTIFY
Specifies the user ID to be notified after successful or unsuccessful completion of the job.
Successful completion means MAXCC = 0 or 04; unsuccessful completion means MAXCC > 04.
If NOTIFY is not coded, the user has to check the status of the job from the spool.
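Putting the parameters above together, a hedged sketch of a fuller job card. The values are illustrative; TYPRUN=SCAN only checks the JCL syntax without executing the job.
//KC03P83A JOB (8012T),'JANAKI RAM',CLASS=A,MSGCLASS=A,
//*  Continued parameters start within columns 4-16
//         MSGLEVEL=(1,1),NOTIFY=&SYSUID,
//         TIME=(1,30),REGION=4M,TYPRUN=SCAN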
2. Datasets
• A collection of data records which are logically related.
• Every dataset has a unique name, which can be up to 44 characters long.
• A maximum of 22 name segments separated by ‘.’ can be used; the overall length is inclusive of the ‘.’ separators.
• The first segment of the name is called the High Level Qualifier (HLQ), the last segment is called the Low Level Qualifier (LLQ), and the remaining segments are called Middle Level Qualifiers.
3. Types of Datasets
• VSAM and Non-VSAM.
• VSAM Datasets – ESDS, KSDS, RRDS, and LDS
• Non-VSAM Datasets – PS, PDS, and PDSE
4. Non-VSAM Datasets
• PS – Physical Sequential – stores the data records directly.
• PDS and PDSE – Partitioned Dataset and PDS (Extended) – contain sequential datasets in which the data is stored. The datasets inside a PDS or PDSE are called PDS members.
• PDS and PDSE are also called libraries.
5. Non-VSAM Datasets (Contd..) – PS vs PDS
• PS – The records are grouped into BLOCKS. A block is the basic unit that the system reads into memory while processing the dataset, so it is a good idea to group as many records as possible per block for efficient processing of the data.
• PDS – The members are tracked in directory blocks. The number of directory blocks determines how many members can be created, as each directory block can hold approximately 6 member entries. (A sketch of allocating a PS file and a PDS follows.)
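A hedged sketch of allocating one PS file and one PDS with IEFBR14 (dataset names and attributes are illustrative). The third SPACE subparameter on the PDS requests the directory blocks discussed above.
//ALLOCJOB JOB (8012T),'ALLOCATE',CLASS=A,MSGCLASS=A
//STEP1    EXEC PGM=IEFBR14
//PSFILE   DD DSN=KC03P83.TEST.PSFILE,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(5,2)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//*  The third SPACE subparameter (10) asks for 10 directory blocks,
//*  enough for roughly 60 members at about 6 entries per block
//PDSFILE  DD DSN=KC03P83.TEST.PDSFILE,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(5,2,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000,DSORG=PO)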
6. Non-VSAM Datasets (Contd..) – PDS vs PDSE
• Full name: PDS is a Partitioned Dataset; PDSE is a Partitioned Dataset Extended.
• Dataset organisation: PO for a PDS; LIBRARY for a PDSE.
• Storage: a PDS can be stored on tape and DASD; a PDSE can be stored only on DASD.
• Directory searches: slower for a PDS; faster for a PDSE.
• Free space: in a PDS it is not reused automatically and manual compression is required; in a PDSE it is reused automatically and no manual compression is required.
• Extents: space is allocated in up to 16 extents for a PDS and up to 123 extents for a PDSE.
7. Non –VSAM Datasets (Contd..)
• Datasets are allocated space in extents of allocation units (cylinders, tracks, etc.).
• Extents refer to the chunks of contiguous storage space allocated at a time.
• A non-VSAM dataset can have one primary extent plus up to 15 secondary extents. So if a dataset is allocated 20 primary and 10 secondary quantities of cylinders, at most 1*20 + 15*10 = 170 cylinders of space can be allocated. (A SPACE parameter sketch follows.)
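The primary and secondary quantities are requested through the SPACE parameter. A hedged sketch matching the 20/10 cylinder example above (the dataset name is illustrative):
//STEP1    EXEC PGM=IEFBR14
//*  1 primary extent of 20 CYL + up to 15 secondary extents of
//*  10 CYL each = at most 170 cylinders
//BIGFILE  DD DSN=KC03P83.BIG.FILE,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(20,10))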
8. VSAM Datasets
• VSAM – Virtual Storage Access Method
• There are 4 types of VSAM Datasets – KSDS, ESDS, RRDS, and LDS
• Stored only on DASD for faster access.
• VSAM datasets have components: the Cluster, the Control Interval, and the Control Area.
• The Cluster further has Data and Index components, based on the type.
• VSAM records are grouped into Control Intervals (CI).
• A CI is the smallest unit of data that can be swapped between DASD and memory for processing.
• CIs are further grouped into Control Areas (CA).
• IDCAMS is the IBM utility used to define, delete, rename, copy, etc. the VSAM datasets (a sketch of some of these commands follows).
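A hedged sketch of IDCAMS copy, rename, and delete commands (dataset names are illustrative; DEFINE is shown later in the deck):
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(EXT1RRV.VSAM.DATASET) -
        OUTDATASET(EXT1RRV.VSAM.BACKUP)
  ALTER EXT1RRV.VSAM.DATASET NEWNAME(EXT1RRV.VSAM.RENAMED)
  DELETE EXT1RRV.VSAM.RENAMED CLUSTER PURGE
/*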
9. VSAM vs Non-VSAM
• Dataset types: PS (non-VSAM) vs KSDS, ESDS, RRDS, and LDS (VSAM).
• Utilities: IEBGENER, IEFBR14, etc. for non-VSAM vs IDCAMS for VSAM.
• Storage: non-VSAM datasets can be stored on DASD and tape; VSAM datasets only on DASD.
• Record grouping: non-VSAM records are grouped into blocks; VSAM records are grouped into CIs and CAs.
10. VSAM Datasets - KSDS
• KSDS – Key-Sequenced Dataset
• It has data and index components.
• Records in a KSDS are uniquely identified by a key and are arranged in key-sequential order.
• The Data component contains the records, whereas the Index component is used for faster access to the data in the Data component.
• Records can be accessed sequentially or dynamically by supplying the key value.
• The keyword ‘INDEXED’ is used to create KSDS datasets.
11. VSAM Datasets – ESDS
• Entry-Sequenced Dataset.
• It has only a Data component.
• Records are arranged in the order they are inserted and are identified by their physical address – the Relative Byte Address (RBA). If the record length is 20, then the RBA of the first record (bytes 0-19) is 0 and the RBA of the second record (bytes 20-39) is 20.
• Records can be accessed sequentially or dynamically by supplying the RBA value.
• The keyword ‘NONINDEXED’ is used to create ESDS datasets.
• Records in an ESDS cannot be deleted, but they can be marked inactive.
12. VSAM Datasets - RRDS
• Relative Record Dataset.
• It has only a Data component.
• The allocated space is divided into fixed-length slots; records of fixed or variable length can be inserted into the slots.
• Each record is identified by an RRN – Relative Record Number – which works like a serial number.
• Records can be accessed sequentially or dynamically by supplying the RRN.
• The keyword ‘NUMBERED’ is used to create RRDS datasets.
13. Creation of VSAM through a JCL
//EXAMPLE  JOB (VSAMJCL,XXXXXX),CLASS=A,MSGCLASS=A
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(EXT1RRV.VSAM.DATASET) -
         INDEXED/NONINDEXED/NUMBERED -            --> specific to VSAM type
         <<<other parameters specific to the VSAM type>>>) -
       DATA (NAME(EXT1RRV.VSAM.DATASET.DATA)) -   --> for all types of VSAMs
       INDEX (NAME(EXT1RRV.VSAM.DATASET.INDEX))   --> only for KSDS
/*
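As a concrete instance of the template above, a hedged sketch defining a small KSDS (the names, key length, record size, and space values are illustrative). KEYS(10 0) means a 10-byte key at offset 0; RECORDSIZE(80 80) gives the average and maximum record lengths.
//KSDSDEF  JOB (VSAMJCL,XXXXXX),CLASS=A,MSGCLASS=A
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(EXT1RRV.VSAM.KSDS) -
         INDEXED -
         KEYS(10 0) -
         RECORDSIZE(80 80) -
         TRACKS(5 2)) -
       DATA (NAME(EXT1RRV.VSAM.KSDS.DATA)) -
       INDEX (NAME(EXT1RRV.VSAM.KSDS.INDEX))
/*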
14. Catalogs
• In any environment there are system-critical datasets and user datasets.
• Catalogs provide the facility of isolating the system datasets from the user datasets.
• A catalog is a VSAM dataset that maintains records of all the other datasets and the volumes on which they are stored.
• VTOC – every volume has a Table Of Contents that tells the system the physical address of each dataset stored on it.
• In order to access a dataset, the system reads the catalog to find the volume on which it is stored and then reads the VTOC of that volume.
15. Types of Catalogs
• There are two types of catalogs – Master and User.
• The Master catalog maintains all the system datasets. The Master catalog is created during system generation and is stored on a system volume.
• User/application datasets are separated from system datasets by creating User catalogs. Aliases for user datasets are mapped to a user catalog.
• An alias is nothing but the HLQ of the datasets, but it can be multilevel as well.
• Typing LISTCAT in front of a user dataset displays the user catalog (a batch sketch of the command follows).
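A hedged sketch of the batch equivalent, listing the catalog entry for a dataset through IDCAMS (the dataset name is illustrative):
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(EXT1RRV.VSAM.DATASET) ALL
/*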