This document provides an overview of DFSMS basics and data set fundamentals. It discusses the structure of data sets on direct access storage devices (DASD), including volumes, tracks, cylinders, the volume table of contents (VTOC), catalogs, and data set names. It also summarizes the data set organizations, both non-VSAM (direct, sequential, partitioned) and VSAM (KSDS, ESDS, RRDS, linear), as well as newer technologies such as PDSEs, HFS, and zFS. The document concludes with common data set uses, limitations, extended-format features, and defining data set attributes in JCL or with IDCAMS.
1. DFSMS Basics:
Data Set Fundamentals
Get to Know Your Data Sets!
Neal Bohling and Tom Reed
DFSMS Defect Support @ IBM
August 7, 2014
Session Number 16119
4. DASD Structure
Modern devices are modeled after this architecture:
1 track = 56,664 bytes
1 cylinder = 15 tracks
5. Data Sets
• Volumes provide a stream of data.
• We can reference data on a volume by various methods:
– DASD by CCHHRR – Cylinder, Head, Record
– Relative Track (converts to Cyl, Head)
– Relative Byte (which converts to CCHHRR)
• But where do files begin and end?
(Illustration: a raw volume is just a long stream of bits – nothing in the stream itself marks where one data set begins or ends.)
6. Volume Table of Contents
• A volume is a logically defined disk or tape
– Can be a real disk
– Usually is a virtual disk
• Volume label (at cylinder 0, track 0)
– Points to the VTOC
• Volume Table of Contents (VTOC)
– Made up of DSCBs (data set control blocks)
• 10 types – Format 0-9
• For more info, see DFSMSdfp Advanced Services: https://ibm.biz/BdF49T
– Maps out the entire drive
• Drive info (FMT4)
• Free space (FMT5/7)
• Data sets (FMT1/8/3)
• Now we can find any data set on the volume
7. Data Set Names
• Rules:
– Up to 22 qualifiers, each 1 to 8 characters long
– Qualifiers separated by periods (.)
– Up to 44 total characters
– The first character of each qualifier must be A-Z or national (#, $, @)
– Remaining characters can be alphanumeric, national, or hyphens
Example: SYS1.PROCLIB.#BACKUP1 – SYS1 is the high-level qualifier (HLQ), PROCLIB is the second-level qualifier, and #BACKUP1 is the low-level qualifier (LLQ).
8. Catalog
• A catalog is a data set that keeps track of other data sets
• Ties a data set name (DSN) to a volume
• Managed by the CATALOG address space
• Includes MASTER and USER catalogs
• Now we can find any data set in the system
• With shared catalogs, any data set in the sysplex
(Illustration: a program's request to find a data set goes through the master catalog to the catalog address space, which returns the volume holding the data set.)
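A quick way to see what the catalog records about a data set is IDCAMS LISTCAT. A minimal, hedged sketch (the data set name is a placeholder):
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(MY.TEST.DATA) ALL
/*
The ALL keyword prints the full catalog entry, including the volume serial(s) where the data set resides.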
9. Data Set Organization
Data sets are still just streams of bytes with no structure.
Finding information within that stream is difficult.
Solution: Data Set Organization
Defines how the data set is structured internally.
Two main types, with sub-types:
Non-VSAM: direct, sequential, partitioned
VSAM: KSDS, ESDS, RRDS, linear
10. Blocking
• Data streams are logically divided into BLOCKS, which are further divided into RECORDS
• This reduces the number of I/Os
• Track length: set by the device (3390 is 56,664 bytes/track)
• Block size (BLKSIZE): set by the user or calculated automatically
• Record size (LRECL): set by the user
(Illustration: one track holds Block 1 and Block 2; each block holds several records.)
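As a worked example using standard 3390 numbers: with RECFM=FB and LRECL=80, letting the system choose the block size (BLKSIZE=0) typically yields BLKSIZE=27920, i.e. half-track blocking – 349 records per block and two blocks (698 records) per track – so reading a full track costs two I/O transfers instead of the hundreds that unblocked 80-byte records would require.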
12. Direct Organization
• Blocks are arranged by their control number
• No records; the application organizes the blocks' contents
• Accessed via the BDAM access method
• Works like a hash table – space is divided into evenly sized blocks
• Because not every entry may be used, some space may be wasted
• Reads and writes are for whole blocks at a time
Example layout (block number: contents):
1: DATASTRING ONE,
2: [empty]
3: IN A GALAXY FAR, FAR AWAY
4: 42
13. Sequential Organization
• One of the most common organizations you'll see
• Data is split into blocks, which are split into records
• Records are arranged in the order they are written
• To add new records, you either:
– Rewrite the whole file
– Add to the end
• Accessed via the BSAM or QSAM access method
• Also comes in LARGE and EXTENDED formats
(Illustration: Block 1 through Block 4 written one after another.)
14. Partitioned Data Sets (PDS)
• Data is divided up into members
• Members are stored sequentially
• Members have unique names (1-8 characters)
• Directory entries at the beginning of the data set link member names to data locations
(Illustration: a directory listing MEMBER1 through MEMBER4, followed by the members' data stored sequentially.)
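Members are referenced in JCL by coding the member name in parentheses after the data set name. A hedged sketch that prints one member with IEBGENER (data set and member names are placeholders):
//PRINT    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.PDS.LIB(MEMBER1),DISP=SHR
//SYSUT2   DD SYSOUT=*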
15. PDS Limitations
• Limited to one volume
• Prone to fragmentation:
– When a member is deleted, the directory entry is removed, but the space remains unused, leading to fragmentation
– Updating a member's records deletes the old member and rewrites it at the end
– New members are added at the end
• Eventually, a REORG is needed to rebuild the data set and reclaim space
• Sharing can cause problems:
– Only one user should update at a time, but this is not enforced
16. PDS Extended (PDSE)
• Works relatively interchangeably with PDS
• Internal structure is different
• Advantages:
– Can reuse space (no more fragmentation)
– Can extend as needed (still limited to one volume)
– Directory is indexed, lowering seek time
– Members can be shared
• Can store program objects or data, but not both
• Comes in Version 1 and Version 2
• For more information, see Tom's presentations:
– 16126: PDSE Best Practices (Monday)
– 16127: PDSE Version 2 Member Generations (Wed)
17. Generation Data Groups
• Not a data set organization, but a catalog construct
• Groups of data sets organized by number (G0000V00)
• Allows easy tracking of multiple generations/versions
• Must be non-VSAM, and must be cataloged
(Illustration: GDG base MY.DATA holding generations MY.DATA.G0000V00 through MY.DATA.G0005V00; relative references such as MY.DATA(0) name the current generation, MY.DATA(-2) an older one, and MY.DATA(+1) a new one.)
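A GDG base is created with IDCAMS; generations are then allocated in JCL with relative references. A hedged sketch (name and limit are placeholders):
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP -
         (NAME(MY.DATA) -
          LIMIT(5) -
          SCRATCH)
/*
A later step can then create the next generation with DSN=MY.DATA(+1),DISP=(NEW,CATLG).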
19. Virtual Storage Access Method (VSAM)
• Four sub-types:
– Key-Sequenced Data Sets (KSDS)
– Entry-Sequenced Data Sets (ESDS)
– Relative Record Data Sets (RRDS)
– Linear Data Sets (LDS)
• Instead of blocks, VSAM uses Control Intervals and Areas
(Illustration: records are grouped into control intervals (CIs), and CIs are grouped into a control area (CA) that spans one or more tracks.)
21. Key Sequenced Data Set
• Records contain a KEY and DATA
• That KEY is used to create an index
• The index provides direct access to any record
(Illustration: a record such as BOHLING...NEAL...95116...BANKACCOUNT... carries its KEY – here BOHLING – at a fixed position; the INDEX component maps each key to the data CI that holds the record.)
22. KSDS Index Structure
(Illustration: a two-level index. The top index record lists high keys 3, 10, and 22 and points to sequence-set records; each sequence-set record lists the high key of each data CI it covers – 1/2/3, 5/7/10, 12/15/22 – and points to the data CIs themselves.)
24. VSAM Splits
• A split occurs when an INSERT won't fit in a CI
• About half of the records are moved to a new CI
(Illustration: inserting key 15 into a full CI holding keys 3, 10, 22, 40 splits it into one CI holding 3, 10, 15 and a new CI holding 22, 40.)
25. Entry Sequenced Data Set
• No INDEX
• Records are kept in the order they were added
• Always add to the end
– No such thing as "delete"; records can only be flagged "inactive"
– Space from inactive records can never be reused
• Access sequentially, or directly by relative byte address (RBA)
• You can use an alternate index (AIX) to link keys to RBAs
• Good for logs, bank transaction history, etc.
26. Relative Record Data Set
• Pre-formatted fixed-length records
– Sequenced by relative number
– Slots can be used or unused (may have high fragmentation)
– Insert and access are by RRN (relative record number)
– Allows direct and sequential access
– Think of it as a hash table with fixed-size slots
27. Variable Relative Record Data Set
• Similar to RRDS, but uses variable-length records
• Records are stored in relative number order
• Similar to a KSDS:
– Has an index that maps each RRN to an RBA
– Uses SPLITs when inserting
(Illustration: an index CI maps RRN1 through RRN5 to records in the data CI; a deleted slot (record 4) remains in place.)
28. Linear Data Set
• Byte-addressable storage (GET byte 15674)
• CI Size is multiple of 4096
• Essentially, a non-VSAM file with VSAM facilities
• Allows Data-in-Virtual (DIV)
– Maps 4K pages into storage
– Lets the program access the data as if it were memory
29. HFS and z/FS
• HFS - Hierarchical File System
– Used by UNIX to store directory structures
– Single-volume sequential data
– From z/OS, it looks similar to a PDS
– The Unix system "mounts" them – think ISO file (see the mount sketch below)
• zFS – newer replacement for HFS (z/OS 1.7)
– Better performance
– Uses VSAM Linear DS
• References:
– z/OS Unix System Service Planning, https://ibm.biz/BdF43v
– DFSMS Using Data Sets, https://ibm.biz/BdF43m
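As a hedged illustration of "mounting" (the file system and mount point names are invented for this sketch), a zFS file system can be mounted with the TSO MOUNT command:
MOUNT FILESYSTEM('OMVS.TEST.ZFS') MOUNTPOINT('/u/test') TYPE(ZFS) MODE(RDWR)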
30. Common Uses
TYPE – Common use
Direct – data; not that common
Sequential – everything: logs, EREP, notes
PDS – LOADLIBs, JCL collections, CLIST libraries
PDSE – same as PDS
KSDS – data, such as bank records
ESDS – transaction logs
RRDS – data
Linear – DB2 tables, SMS SCDS
HFS/zFS – Unix directory structures
31. Limitations
• All formats have limitations:
Limitation      Direct/Seq     VSAM     PDS          PDSE   Unix
Max size        65,535 tracks  4 GB     65,535 trks  -      -
Max extents     16/vol         123/255  16           123    123/255
Max volumes     59             59       1            1      1
Sharing integ.  No             Some     No           Some   No
Fragmentation   Yes/No         Yes      Yes          Yes    No
32. Extended Format
• Extended Format relieves some of the limitations
• Logically the same format
• Stored differently on the hardware to exploit hardware and software facilities of SMS
• Must be SMS managed
• Enabled through allocation or data class parameter
• Allows some extra features:
– Compression
– Data Striping
– Extended Addressing (larger files)
– VSAM Allocation and Buffering
33. Extended Format Features
• Compression:
– Reduces space to store data
– Enabled via Data Class compaction attribute
– Works with Sequential and KSDS
– Sessions 16130, 16138, and 15709 cover it in more detail
• Striping:
– Distributes Data blocks across multiple volumes
– Allows higher throughput rate
– Works with VSAM and Sequential data sets
– Controlled by storage class parameters
34. Extended Format Features
• Extended Addressing:
– VSAM can grow to 4G CIs × the CI size (128 TB for a 32K CI)
– Sequential data sets can have 123 extents per volume over 59 volumes
– PDS, PDSE, and direct do not change
• VSAM Allocation / Buffering
– Partial Release
– System-Managed Buffering
– Note: Catalogs cannot be extended format
36. Defining Data Set Attributes
• Defining Non-VSAM
– JCL
– ISPF Panels
– Dynamic Allocation
• Defining VSAM
– IDCAMS
– JCL
– Dynamic Allocation
Parameters are roughly the same between utilities.
We'll focus on the attributes.
37. Data Set Attributes
• Space Units
– Defines which construct you'll use to allocate space
– Possible values:
• CYL – cylinders (=15 tracks, ~830KB)
• TRK – tracks (=56,664 bytes)
• BLKS – blocks
• KB, MB, BYTES
• Records
• Average Record Unit (AVGREC)
– Used primarily in Data Class
– Defines a multiple for Primary and Secondary space
– Possible values: U (unit), K (1,024), M (1,048,576)
38. Data Set Attributes
• Primary Space
– How much space to allocate in the first extent
– Specified in whole numbers
• Secondary Space
– Space to allocate when primary is exhausted
• Directory Blocks:
– Number of blocks to allocate for the PDS or PDSE directory
– Set to 0 (or leave blank) for non-PDS
39. Data Set Attributes
• Space attribute example (JCL):
SPACE=(TRK,(50,3,100),RLSE)
Here TRK is the space unit, 50 the primary quantity, 3 the secondary quantity, and 100 the number of directory blocks; RLSE releases unused space when the data set is closed.
40. Non-VSAM Attributes
• Data Set Organization (DSORG)
– PS – Physical Sequential
– PO – Partitioned
– DA – Direct
– For absolute addressing, add a “U”, such as “PSU”
• Record Format (RECFM)
– Specifies characteristics of the records
– Example: FB, VB, VBS, U
First letter: F – fixed length; V – variable length; U – undefined length
Following letters: B – blocked records; S – spanned records; BS – blocked and spanned
41. Data Set Attributes
• Logical Record Length (LRECL/RECORDSIZE)
– Specifies the length, in bytes, of the records
– If variable length (VB), specifies the maximum length
– Has no effect for RECFM=U
• Block Size (BLKSIZE)
– Defines the size of the blocks to be used
– Specify 0 to use System-Determined Block Size
– For fixed-length records, must be a multiple of LRECL
42. Non-VSAM Attributes
• Data Set Type (DSNTYPE)
– Defines the type of data set you are creating
– Possible values:
• LIBRARY – Partitioned Data Set Extended (PDSE)
• PDS – Partitioned Data Set
• HFS – Hierarchical File System
• LARGE – Creates a large-format sequential
• EXTREQ – Extended format, required
• EXTPREF – Extended format, preferred
• BASIC – Basic format sequential
• Blank – Sequential or PDS, depending on Directory field
– DSNTYPE works together with other parameters and does not override them
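Putting these attributes together, here is a hedged sketch that allocates a new PDSE with IEFBR14 (all job and data set names are placeholders; BLKSIZE=0 requests a system-determined block size):
//ALLOC    JOB (ACCT),'ALLOCATE PDSE',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IEFBR14
//NEWLIB   DD DSN=MY.TEST.LIBRARY,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(50,10,20),RLSE),
//            DSNTYPE=LIBRARY,
//            RECFM=FB,LRECL=80,BLKSIZE=0
On a non-SMS system you would also need a UNIT (and possibly VOL) parameter.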
43. VSAM Attributes
• RECORDSIZE
– Same as LRECL
– Defines the size of the records
• CONTROLINTERVALSIZE
– Defines the CI Size
– Similar to BLKSIZE
• FREESPACE
– Defines how much space is left free in each CI (and CA) for inserts and expansion
• SPANNED/NONSPANNED
– Defines whether records are spanned (similar to S in FBS)
44. VSAM Attributes
• KEYS(length offset)
– Defines the length and offset of the VSAM key for KSDS
– Example: KEYS(14,0) – the key is the first 14 bytes of the record (BOHLING..., the name field)
– Example 2: KEYS(5,28) – the key is 5 bytes starting at offset 28 (95116, the numeric field)
(Sample record: BOHLING........NEAL.........95116.......BANKACCOUNT.......)
45. VSAM Attributes
• INDEXED / LINEAR / NONINDEXED / NUMBERED:
– Keyword that defines the VSAM file type
– INDEXED – KSDS
– LINEAR – linear
– NONINDEXED – ESDS
– NUMBERED – RRDS
• Many, many more...
– See DFSMS Access Method Services Commands (SC23-6846-01)
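Tying these attributes together, a hedged IDCAMS sketch that defines a KSDS keyed like the slide-44 example (names and sizes are placeholders):
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER -
         (NAME(MY.TEST.KSDS) -
          INDEXED -
          KEYS(14 0) -
          RECORDSIZE(100 200) -
          FREESPACE(10 10) -
          CONTROLINTERVALSIZE(4096) -
          CYLINDERS(5 1))
/*
RECORDSIZE gives average and maximum record lengths; FREESPACE leaves 10% of each CI and CA free for inserts.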
48. Utilities
• IDCAMS
– Works with VSAM data sets
– Can do define, copy, delete
– Also interacts with catalog information
• IEBCOPY
– Copy PDS and PDSE
– Convert between PDS and PDSE
• IEBGENER
– Copy sequential files
• IEFBR14
– Does nothing, but can use DD cards to manage data sets
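For example, a hedged IEBCOPY sketch that copies selected members from a PDS into a PDSE, which also performs the conversion (all names are placeholders):
//COPY     EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INLIB    DD DSN=MY.PDS.SOURCE,DISP=SHR
//OUTLIB   DD DSN=MY.PDSE.TARGET,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=OUTLIB,INDD=INLIB
  SELECT MEMBER=(MEMBER1,MEMBER2)
/*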
49. Utilities
• DFSMSdss (DSS)
– Very powerful data movement tool
– Does copy / backup / data conversion
• ISPF Option 3 panels
– Panel-driven utilities to create / delete / manage data sets
– Also has an editor that can edit sequential data sets
• ISMF Data Set Panels
– Allows edit / delete / rename / etc
– Allows you to save data sets lists
50. Introduction to Managing Data
• Storage Management Subsystem (SMS)
– Runs under the SMS ASID
– Helps storage administrators manage data sets
– Includes classes to simplify allocation and define attributes:
• DATA CLASS – defines JCL parms for default use
• STORAGE CLASS – defines accessibility and performance requirements
• MANAGEMENT CLASS – defines retention and management
• STORAGE GROUP – defines which group of volumes
– Automatic Class Selection (ACS) routines assign classes based on user-written logic (a sketch follows below)
• Come to session 16115 for hands-on ACS writing (Fri @ 10a)
– SMS-managed = has a storage class assigned
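As a hedged sketch of what a (simplified) ACS routine looks like – the filter, data set pattern, and class names here are invented for illustration:
PROC STORCLAS
  FILTLIST DBDATA INCLUDE(DB2.**)     /* hypothetical filter    */
  SELECT
    WHEN (&DSN = &DBDATA)             /* DB2 data sets          */
      SET &STORCLAS = 'DBFAST'        /* hypothetical class     */
    OTHERWISE
      SET &STORCLAS = 'STANDARD'      /* hypothetical default   */
  END
END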
51. Subsystems Continued
• SMSPDSE / SMSPDSE1
– Address spaces needed to use PDSE
– Manage serialization on PDSE across the plex
– Enables system-wide buffering
– See session 16126 (Monday @ 4:15) for further info
• SMSVSAM
– Enables VSAM Record Level Sharing
– Enables cross-plex serialization at the record level
– Also has enhanced buffering and caching capabilities
– Not required for VSAM, but has benefits in cross-plex environments
– See sessions 16124 and 16125 (Mon/Tues) for more info
52. Subsystems
• Hierarchical Storage Management (HSM)
– Very powerful storage management tool
– Migrates / recalls data sets based on Retention parameters
– Create and maintains backups
– Allows you to keep important data on DASD and roll old or less-used data to backup storage
– Retention values are set by Management Class (SMS)
– HSM control data sets are VSAM data sets
– For more, see sessions:
• CATALOG
– Provides the interface into the catalog data sets
– Runs in ASID CATALOG
53. Summary
• Two basic types of data sets – VSAM / NON-VSAM
• Many sub-varieties
– PDS/PDSE
– Sequential/Direct
– Fixed/Variable/Spanned Block
• Data set attributes define type and options
• Attributes reflect devices and format
• There are several utilities to help create / manage data
• Several subsystems can be enabled to assist and provide additional features
54. DFSMS Basics:
Data Set Fundamentals
Get to Know Your Data Sets!
Neal Bohling and Tom Reed
DFSMS Defect Support @ IBM
August 7, 2014
Session Number 16119
56. Notices & Disclaimers
The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in
other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is
no guarantee that the same or similar results will be obtained elsewhere.
The responsibility for use of this information or the implementation of any of these techniques is a customer responsibility and
depends on the customer's or user's ability to evaluate and integrate them into their operating environment. Customers or users
attempting to adapt these techniques to their own environments do so at their own risk. IN NO EVENT SHALL IBM BE LIABLE
FOR ANY DAMAGE ARISING FROM THE USE OF THIS INFORMATION, INCLUDING BUT NOT LIMITED TO,
LOSS OF DATA, BUSINESS INTERRUPTION, LOSS OF PROFIT OR LOSS OF OPPORTUNITY.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or
other publicly available sources. IBM has not necessarily tested those products in connection with this publication and cannot
confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of
non-IBM products should be addressed to the suppliers of those products.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents
or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
Any statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals
and objectives only.
57. Trademarks
DFSMSdfp, DFSMSdss, DFSMShsm, DFSMSrmm, IBM, IMS, MVS, MVS/DFP, MVS/ESA, MVS/SP, MVS/XA,
OS/390, SANergy, and SP are trademarks of International Business Machines Corporation in the United States,
other countries, or both.
AIX, CICS, DB2, DFSMS/MVS, Parallel Sysplex, OS/390, S/390, Seascape, and z/OS are registered trademarks
of International Business Machines Corporation in the United States, other countries, or both.
Domino, Lotus, Lotus Notes, Notes, and SmartSuite are trademarks or registered trademarks of Lotus
Development Corporation. Tivoli, TME, Tivoli Enterprise are trademarks of Tivoli Systems Inc. in the United
States and/or other countries.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both. UNIX is a registered trademark in the United States and other countries licensed exclusively
through The Open Group.
Other company, product, and service names may be trademarks or service marks of others.