IBM® Debug Tool for z/OS® lets you examine, monitor, and control the execution of C, C++, COBOL, and PL/I programs. This sophisticated program provides multiple conditional and unconditional breakpoints, step-mode debugging, and the ability to monitor and update variables and storage. Debug Tool is an interactive source-level debugging tool for applications compiled in a variety of development environments.
How To Master PACBASE For Mainframe In Only Seven Days (Srinimf-Slides)
IBM VisualAge Pacbase is a structured, code-generating programming language developed and maintained by IBM. VisualAge Pacbase runs on both IBM and non-IBM mainframes and integrates with IBM WebSphere Studio Application Developer. When Pacbase code is compiled, it is first translated into COBOL and then compiled to binary.
Capital One Delivers Risk Insights in Real Time with Stream Processing (Confluent)
Speakers: Ravi Dubey, Senior Manager, Software Engineering, Capital One + Jeff Sharpe, Software Engineer, Capital One
Capital One supports interactions with real-time streaming transactional data using Apache Kafka®. Kafka helps deliver information to internal operations teams and bank tellers to assist with assessing risk and protecting customers in a myriad of ways.
Inside the bank, Kafka allows Capital One to build a real-time system that takes advantage of modern data and cloud technologies without exposing customers to unnecessary data breaches, or violating privacy regulations. These examples demonstrate how a streaming platform enables Capital One to act on their visions faster and in a more scalable way through the Kafka solution, helping establish Capital One as an innovator in the banking space.
Join us for this online talk on lessons learned, best practices and technical patterns of Capital One’s deployment of Apache Kafka.
-Find out how Kafka delivers on a 5-second service-level agreement (SLA) for inside branch tellers.
-Learn how to combine and host data in-memory and prevent personally identifiable information (PII) violations of in-flight transactions.
-Understand how Capital One manages Kafka Docker containers using Kubernetes.
Watch the recording: https://videos.confluent.io/watch/6e6ukQNnmASwkf9Gkdhh69?.
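To make the PII point concrete, here is a minimal, hypothetical sketch of masking sensitive fields in an in-flight transaction record before it is handed to downstream consumers. The field names are invented for illustration; Capital One's actual schemas and tooling are not public.

```python
import copy

# Hypothetical field names; real schemas will differ.
PII_FIELDS = {"account_number", "ssn", "card_number"}

def mask_pii(record: dict) -> dict:
    """Return a copy of a transaction record with PII fields masked,
    keeping only the last four characters for operator reference."""
    masked = copy.deepcopy(record)
    for field in PII_FIELDS:
        value = masked.get(field)
        if isinstance(value, str) and len(value) > 4:
            masked[field] = "*" * (len(value) - 4) + value[-4:]
    return masked

txn = {"account_number": "1234567890", "amount": 42.50, "branch": "TX-014"}
print(mask_pii(txn))  # account_number becomes "******7890"
```

In a streaming deployment this kind of masking would typically run inside the stream processor, so that unmasked records never leave the ingestion boundary.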
Mastering Chaos - A Netflix Guide to Microservices (Josh Evans)
QConSF 2016 Abstract:
By embracing the tension between order and chaos and applying a healthy mix of discipline and surrender, Netflix reliably operates microservices in the cloud at scale. But every lesson learned and solution developed over the last seven years was born out of pain for us and our customers. Even today we remain vigilant as we evolve our service architecture. For those just starting the microservices journey, these lessons and solutions provide a blueprint for success.
In this talk we’ll explore the chaotic and vibrant world of microservices at Netflix. We’ll start with the basics - the anatomy of a microservice, the challenges around distributed systems, and the benefits realized when integrated operational practices and technical solutions are properly leveraged. Then we’ll build on that foundation exploring the cultural, architectural, and operational methods that lead to microservice mastery.
BASE24 classic is a rock-solid system for processing ATM and credit card transactions. However, the technology behind BASE24 is decades old, and for that reason BASE24 classic does not integrate well into modern enterprise IT environments; there are also shortcomings in compliance areas.
This presentation shows how BASE24 classic can easily be modernized without having to touch any source code file, thus keeping the "Kernel" unchanged.
A message broker is a way to distribute information across servers. Message brokers are increasingly used to build distributed systems and to scale up massive data distribution in this information era. Kafka is one of the message broker tools that has recently emerged for data streaming. These slides explain the benefits of message brokers in general, and of Kafka in particular, for high-quality data distribution.
The slides were exported from Microsoft PowerPoint to PDF.
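The core idea of a message broker can be sketched in a few lines. The following toy in-memory publish/subscribe broker is illustrative only; Kafka adds persistence, partitioning, replication, and consumer groups on top of this basic pattern.

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """A minimal in-memory publish/subscribe broker, showing the
    decoupling a message broker provides between producers and
    consumers."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Producers never talk to consumers directly; the broker fans out.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("payments", received.append)
broker.publish("payments", {"id": 1, "amount": 9.99})
print(received)  # [{'id': 1, 'amount': 9.99}]
```

Because producers only know topic names, consumers can be added, removed, or scaled out without touching the producing side, which is the property the slides highlight.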
DB2 for z/OS - Starter's guide to memory monitoring and control (Florence Dubois)
DB2 for z/OS makes more and more use of REAL memory to improve performance and reduce cost. But if you don't carefully budget and monitor the use of REAL memory on your system, you could be putting your applications at risk. This presentation goes back to the basics and answers the most common questions about REAL memory management, including: How does DB2 use virtual and REAL memory? How do you build a budget based on system settings and buffer pool sizes? How do you size the LFAREA? What are the key performance indicators, and how do you know you are running 'safely'? What can be done to protect the system?
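The budgeting exercise described above boils down to simple arithmetic. The sketch below uses invented numbers purely for illustration; a real budget must come from your own DSNZPARM settings, buffer pool definitions, and RMF reports.

```python
# Hypothetical figures for illustration only; real budgets come from
# your system settings, -DISPLAY BUFFERPOOL output, and RMF reports.
buffer_pools_gb = {"BP0": 2, "BP1": 16, "BP8K0": 4, "BP32K": 2}
other_dbm1_gb = 10       # EDM pools, RID pool, sort work, etc.
other_db2_aspaces_gb = 3 # MSTR, DIST, IRLM address spaces
zos_and_other_gb = 8     # operating system and other workloads

total_db2_gb = sum(buffer_pools_gb.values()) + other_dbm1_gb + other_db2_aspaces_gb
total_demand_gb = total_db2_gb + zos_and_other_gb
lpar_real_gb = 64

headroom = lpar_real_gb - total_demand_gb
print(f"DB2 demand: {total_db2_gb} GB, total demand: {total_demand_gb} GB, "
      f"headroom: {headroom} GB")
assert headroom > 0, "budget exceeds REAL storage: risk of paging"
```

The point of the exercise is the final assertion: if the summed demand approaches or exceeds the LPAR's real storage, the system is at risk of paging, which is exactly the "running safely" question the presentation addresses.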
Mainframe Integration, Offloading and Replacement with Apache Kafka (Kai Wähner)
Video recording of this presentation:
https://youtu.be/upWzamacOVQ
Blog post with more details:
https://www.kai-waehner.de/blog/2020/04/24/mainframe-offloading-replacement-apache-kafka-connect-ibm-db2-mq-cdc-cobol/
Mainframes are still hard at work, processing over 70 percent of the world’s most essential computing transactions every day. Very high cost, monolithic architectures, and missing experts are the key challenges for mainframe applications. Time to get more innovative, even with the mainframe!
Mainframe offloading with Apache Kafka and its ecosystem can be used to keep a more modern data store in real-time sync with the mainframe. At the same time, it persists the event data on the bus to enable microservices and delivers the data to other systems such as data warehouses and search indexes.
But the final goal and ultimate vision is to replace the mainframe with new applications using modern and less costly technologies. Stand up to the dinosaur, but keep in mind that legacy migration is a journey! Kai will guide you to the next step of your company's evolution!
You will learn:
- how to not only reduce operational expenses but provide a path for architecture modernization, agility and eventually mainframe replacement
- what steps some of Confluent’s customers already took, leveraging technologies like Change Data Capture (CDC) or MQ for mainframe offloading
- how an event streaming platform enables cost reduction, architecture modernization, and a combination of a mainframe with new technologies
Building an Active-Active IBM MQ System (matthew1001)
Shows how message availability and service availability can be configured to reduce downtime and improve overall availability of your MQ network. Demonstrates how Uniform Clusters can be used to help keep your service availability high.
Contains information about DSNZPARM, the module that holds the DB2 configuration parameters: all about the different types of zPARMs, and a way to update them dynamically.
HA/DR options with SQL Server in Azure and hybrid (James Serra)
What are all the high availability (HA) and disaster recovery (DR) options for SQL Server in an Azure VM (IaaS)? Which of these options can be used in a hybrid combination (Azure VM and on-prem)? I will cover features such as AlwaysOn AG, Failover cluster, Azure SQL Data Sync, Log Shipping, SQL Server data files in Azure, Mirroring, Azure Site Recovery, and Azure Backup.
Apache Kafka vs. Integration Middleware (MQ, ETL, ESB) (Kai Wähner)
Learn the differences between an event-driven streaming platform and middleware like MQ, ETL and ESBs – including best practices and anti-patterns, but also how these concepts and tools complement each other in an enterprise architecture.
Extract-Transform-Load (ETL) is still a widely-used pattern to move data between different systems via batch processing. Due to its challenges in today's world, where real time is the new standard, an Enterprise Service Bus (ESB) is used in many enterprises as an integration backbone between any kind of microservice, legacy application or cloud service to move data via SOAP / REST Web Services or other technologies. Stream Processing is often added as its own component in the enterprise architecture for correlation of different events to implement contextual rules and stateful analytics. Using all these components introduces challenges and complexities in development and operations.
This session discusses how teams in different industries solve these challenges by building a native streaming platform from the ground up instead of using ETL and ESB tools in their architecture. This allows teams to build and deploy independent, mission-critical real-time streaming applications and microservices. The architecture leverages distributed processing and fault tolerance with fast failover, no-downtime rolling deployments and the ability to reprocess events, so you can recalculate output when your code changes. Integration and Stream Processing are still key functionality, but can be realized natively in real time instead of using additional ETL, ESB or Stream Processing tools.
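The contrast with batch ETL can be illustrated with a toy stateful stream processor: each event updates state and emits a result immediately, instead of waiting for a nightly batch run. This is a deliberately simplified sketch of the idea, not how Kafka Streams or ksqlDB code is actually written.

```python
from collections import defaultdict
from typing import Iterable, Iterator

def stream_totals(events: Iterable[dict]) -> Iterator[dict]:
    """Stateful stream processing in miniature: every incoming event
    updates the running per-account total and emits an update at once,
    rather than recomputing everything in a batch job."""
    totals: dict[str, float] = defaultdict(float)
    for event in events:
        totals[event["account"]] += event["amount"]
        yield {"account": event["account"], "total": totals[event["account"]]}

events = [
    {"account": "A", "amount": 10.0},
    {"account": "B", "amount": 5.0},
    {"account": "A", "amount": 2.5},
]
for update in stream_totals(events):
    print(update)
# The final update for account A reports a running total of 12.5.
```

A batch ETL job would produce the same totals, but only once per run; the streaming version makes every intermediate state available to downstream consumers as it happens.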
Qualidade de Software em zOS usando IBM Debug Tool e RDz (Paulo Batuta)
My friend Claudio and I put together this presentation on software quality using IBM Debug Tool and RDz. It was submitted to and accepted at the 2013 World Congress in Computer Science in Las Vegas. Claudio went there to present it, this past Tuesday!
Electrical shop management system project report.pdf (Kamal Acharya)
Electronic Shop Management software helps electronics showroom owners and management staff by producing different kinds of financial and stock-tracking reports. The software can manage all electronic stock, and the shop owner can manage customer and buyer data, as well as tax information and other government charges, including recycling charges. The electrical shop management system is a workable application for retail store inventory and account management. It keeps a list of stocks and products at a store and can perform operations on them. The most important operation is a PURCHASE, which covers all the transactions, billing details, and stock purchasing details involved.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
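As a rough illustration of capturing a deployment bill of materials, the sketch below records artifact names, content hashes, and deployment metadata. Real DBOM formats and the tooling discussed in the talk differ; treat the layout here as invented.

```python
import hashlib
import json
import os
import time

def build_dbom(artifact_paths: list[str], environment: str) -> dict:
    """Capture a minimal deployment bill of materials: what was deployed,
    where, when, and with which content hashes. Illustrative layout only;
    production DBOM schemas vary by tool."""
    entries = []
    for path in artifact_paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        entries.append({"artifact": os.path.basename(path), "sha256": digest})
    return {
        "environment": environment,
        "deployed_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "artifacts": entries,
    }

# Example with a throwaway stand-in artifact file:
with open("app_artifact.bin", "wb") as f:
    f.write(b"demo artifact bytes")
print(json.dumps(build_dbom(["app_artifact.bin"], "staging"), indent=2))
```

Recording hashes at deployment time lets you later verify that what is running in an environment matches what was actually released, which is the governance gap the talk describes.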
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of a talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2022).
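The DIAR idea of classifying seed bytes can be sketched with a toy coverage function: a byte is "interesting" if mutating it can change coverage, and every other byte is a candidate for removal. This is a simplified illustration of the concept, not the actual DIAR algorithm or AFL integration.

```python
def coverage(data: bytes) -> frozenset:
    """Toy stand-in for real branch coverage: this 'parser' only looks
    at a 4-byte header, so the payload bytes are uninteresting."""
    branches = set()
    if data[:2] == b"PK":
        branches.add("magic_ok")
    if len(data) > 3 and data[3] > 0x7F:
        branches.add("high_flag")
    return frozenset(branches)

def interesting_bytes(seed: bytes, trials: int = 8) -> list[int]:
    """A byte offset is interesting if flipping one of its bits can
    change observed coverage; all other offsets could be trimmed."""
    base = coverage(seed)
    keep = []
    for i in range(len(seed)):
        for t in range(trials):
            mutated = bytearray(seed)
            mutated[i] ^= 1 << (t % 8)
            if coverage(bytes(mutated)) != base:
                keep.append(i)
                break
    return keep

seed = b"PK\x00\xff" + b"\x00" * 64  # 4-byte header + 64 bytes of padding
print(interesting_bytes(seed))       # only header offsets survive
```

In this toy, only offsets 0, 1 and 3 affect coverage, so a DIAR-style trim would discard the 64 padding bytes and every fuzzing mutation would then land on bytes that can actually steer the program.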
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Debug Tool
IBM Debug Tool for z/OS, V13.1
Highlights
• Provides a single debugging tool for batch, TSO, CICS, DB2, DB2 stored procedures and IMS applications written in COBOL, PL/I, C/C++ and assembler
• Offers more productivity enhancements when used with Rational Developer for System z (available separately)
• Includes tools to quickly identify and convert OS/VS COBOL code to the ANSI 85 standard
• Supplies tools to help you determine how thoroughly your code has been tested
• Workstation GUI interface
In an increasingly complex and competitive environment with challenging business demands, application developers face constant pressure to deliver function-rich applications quickly. Regardless of whether it was designed to perform routine or critical tasks, the underlying code that drives your applications is highly complex. Application developers have to work quickly to meet demand, with minimal errors — even adapting code on the fly as business needs evolve. To build and service applications effectively, you need robust, easy-to-use tools to compile, test and debug your applications.
IBM® Debug Tool for z/OS®, V13.1 is an interactive source-level debugging tool for compiled applications. It is a program testing and analysis aid that helps you examine, monitor and control the execution of applications written in C, C++, COBOL, PL/I or assembler on a z/OS system. It provides debugging capability for applications running in a variety of environments, such as batch, TSO, IBM CICS®, IBM IMS™, IBM DB2®, IBM DB2 stored procedures and IBM z/OS UNIX® System Services.
Debug Tool also includes features to help you identify old OS/VS and VS COBOL II applications and upgrade the source code automatically to IBM Enterprise COBOL. It also supplies tools to help you determine how thoroughly your code has been tested.
Debug Tool for z/OS, V13.1 replaces all prior versions of both IBM Debug Tool for z/OS and IBM Debug Tool Utilities and Advanced Functions for z/OS. This single product includes all of the function in the previous separate products as well as the new V13.1 function. Delivering one comprehensive product gives existing Debug Tool for z/OS customers significantly more function and helps simplify ordering and installation.
Advanced debugging of composite applications on System z
Figure 1: Debug Tool environment
Debug Tool is tightly integrated with IBM Rational® Developer for System z® and other tools in the IBM portfolio of problem determination tools, so that you can develop, test and debug traditional and SOA applications from the same user interface.
Data at your fingertips
Debug Tool provides an interactive, full-screen, IBM 3270 system-based terminal interface with four windows that enable single-step debugging, dynamic patching and breakpoints:
• The Monitor window displays the status of items you select, variables and registers. You can view, monitor and alter application variables or storage in real time.
• The Source window displays the program code, highlighting the statement being run. In the prefix area of this window, you can enter commands to set, display and remove breakpoints.
• The Log window records and displays your interactions with Debug Tool and can show program output. The information you see in this window is included in the log file.
• The Memory window (swappable with the Log window) helps you display and scroll through sections of memory. You can update memory by typing over existing data with new data. The Memory window also keeps track of addresses for easier navigation.
Debug Tool gives you great flexibility in choosing how to display monitored variables and lets you update large or small variables directly in the Monitor window.
For COBOL character variables displayed using the automonitor command, Debug Tool displays values in character format regardless of whether the string contains unprintable characters. You can change these values by typing over them in the Monitor window.
In the automonitor section of the Monitor window, you can display the value of a variable in the variable's declared data type, and user register names are shown in assembler AUTOMONITOR output when possible.
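For instance, the automonitor function is controlled from the Debug Tool command line. A minimal sketch, assuming a hypothetical variable name WS-TOTAL; exact command syntax can vary by release:

```
SET AUTOMONITOR ON ;
LIST ( WS-TOTAL ) ;
SET AUTOMONITOR OFF ;
```

SET AUTOMONITOR ON adds the variables referenced at the current statement to the Monitor window, while LIST displays a single variable on demand.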
Control your debugging environment
With Debug Tool, you can choose how you view and manage the process of debugging your applications. Using the full-screen interface, you can interactively debug almost any application as it runs — including batch applications.
You can start Debug Tool when an application starts or during an abend. Alternatively, you can write applications so that they start the tool automatically — at specified times — interrupting the running application.
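One common way to start Debug Tool for a batch application is through the Language Environment TEST runtime option. The following is a sketch only: the program name MYPROG is hypothetical, and the TEST suboptions that apply depend on your installation and language, so check the Debug Tool documentation before copying it:

```
//GO     EXEC PGM=MYPROG,
//       PARM='/TEST(ALL,*,PROMPT,*)'
```

Here TEST(ALL,...) asks Debug Tool to gain control for all conditions when the program runs.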
Figure 2: Rational Developer for System z works with Debug Tool to help mainframe developers be more productive.
Using the setup utility, you can create a setup file that contains the program information you need — including file allocations, runtime options, program parameters and application name — to run your application and start Debug Tool. Setup files can save you time when you are debugging a program that you have to restart multiple times. You can create several setup files for each program; each setup file can store information about starting and running your program under different circumstances.
IBM Language Environment® user exits can be linked with the application or with a private copy of a Common Execution Environment (CEE) runtime load module.
Review source while you debug
Debug Tool enables you to focus on a particular problem area by checking your application for errors one line at a time. By using single-step debugging — and setting dynamic breakpoints — you can monitor, interrupt and continue the flow of the application to identify errors easily.
A basic breakpoint indicates a stopping point in your program. For example, you can use a breakpoint to stop on a particular line of code. Breakpoints can also contain instructions, calculations and application changes. For example, you can set a breakpoint that has Debug Tool display the contents of a variable when the debugging process reaches a particular line of code.
You can also use a breakpoint to patch the flow of the program dynamically. You can set breakpoints in an application to monitor variables for changes, and to watch for specified exceptions and conditions while the application runs. You can set, change and remove breakpoints as you debug the application, which means you don't have to know where you want a breakpoint before you start debugging.
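As a sketch of such a breakpoint, a Debug Tool command in the COBOL subset might look like the following (the line number and variable name are hypothetical):

```
AT LINE 230 PERFORM
  LIST ( WS-BALANCE ) ;
  GO ;
END-PERFORM ;
```

When execution reaches line 230, Debug Tool lists WS-BALANCE in the Log window and then resumes the program without prompting.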
In CICS, Debug Tool supports "pattern matching breakpoints" that use the program or compile-unit names specified in CADP or DTCN profiles to start Debug Tool, and it provides commands to enable and disable these breakpoints.
You can debug applications written in a mix of COBOL, C, C++ or PL/I without leaving the tool, and you can include assembler programs in this mix. Using the disassembly view, you can also debug programs compiled with the NOTEST compiler option or applications that include other languages.
For each programming language, you can use a set of interpreted commands to specify actions to be taken. These commands are subsets of the languages themselves — so they're easy to learn — and they let you modify the flow of your application while you are debugging it. You can use the commands to dynamically patch (or alter) the value of variables and structures and to control the flow of an application.
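For example, a COBOL-subset command sequence entered at a breakpoint might patch a variable and resume the program (the variable name below is hypothetical):

```
MOVE 0 TO WS-RETURN-CODE ;
LIST ( WS-RETURN-CODE ) ;
GO ;
```

The MOVE statement alters the variable in the running application just as the equivalent COBOL statement would, without recompiling.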
SOA development and debugging
Debug Tool supports debugging of monolithic, composite and SOA applications. Customers creating new Web services with Rational Developer for System z — whether newly written or refactored from existing application assets — can immediately debug them using the Debug Tool plug-in provided.
The DESCRIBE CHANNEL and LIST CONTAINER commands can display CICS channels and containers, including containers that hold state information for Web services. You can display this information even if it is not being referenced by the application program being debugged.
Debug Tool now provides support for invoking the z/OS XML parser to parse complete XML 1.0 or 1.1 documents in memory. If the document is syntactically valid, the XML is formatted and shown in the Debug Tool log; otherwise, diagnostic information is provided to help identify the syntax error.
Enhanced debugging capabilities
Debug Tool provides a rich set of commands, tools and utilities to help you debug your programs. When used with the setup utility in Debug Tool, these can help you to:
• Prepare your high-level language and assembler programs for debugging by converting, compiling (or assembling) and linking your COBOL, PL/I, C/C++ and assembler source code.
• Conduct analysis on your test cases to determine how thoroughly they validate your programs.
In complex applications, it’s easy to forget how you reached a particular point in your program. Debug Tool commands enable you to replay statements that have already run. If you compile your program with the IBM COBOL for OS/390® and VM compiler, or the Enterprise COBOL for z/OS compiler, you can review the values of variables and replay the statements while debugging.
For programs compiled with the COBOL for OS/390 and VM compiler, the Enterprise COBOL for z/OS compiler, or the Enterprise PL/I for z/OS compiler, you can automatically monitor the values of variables referenced at the current statement. When the automonitor function is active, any variables referenced by the current statement are automatically selected for monitoring, and you can view them in the Monitor window.
Move to Enterprise COBOL to reuse and extend existing code
Previously, to create faster, more efficient applications, you had to sacrifice debugging support. With Debug Tool you can debug Enterprise COBOL for z/OS applications that have been compiled with standard or full-optimization compiler options.
You can also analyze your load modules to identify candidate OS/VS COBOL programs for conversion, and then convert those OS/VS COBOL applications to Enterprise COBOL for z/OS. You can then compile and debug these applications to extend the life of your existing code.
Debug Tool software also provides coverage tools that enable you to analyze your test cases and determine how thoroughly they exercise your programs.
Combine with other development tools to optimize applications
Debug Tool shares a number of side files with IBM Fault Analyzer, making it easier for you to test and manage abends in new and existing applications. For example, the IDILANGX file produced by Fault Analyzer can be used by Debug Tool to debug assembler programs, and you can create a readable listing from a Fault Analyzer side file or from a SYSDEBUG file generated by the COBOL compiler.
You can also use the Interactive System Productivity Facility (ISPF) panels in Debug Tool to invoke File Manager Base, DB2 or IMS functions. In addition, a user exit enables you to specify a TEST run-time option string in the DB2, IMS or batch environments.
Debug in many environments
IBM Debug Tool can help you debug an application while it runs in a host environment such as batch, TSO, ISPF, CICS, IMS or DB2 (including IBM DB2 stored procedures). Debug Tool can help you debug almost any application written in almost any host language — COBOL, PL/I, C/C++ or assembler — running on z/OS systems.
With Debug Tool, you can compile and link your COBOL, PL/I, C and C++ programs, assemble and link assembler programs, and pre-process and compile your CICS and DB2 programs.
IBM Rational Developer for System z works with Debug Tool to give your developers a fully integrated development, test and debugging environment for all applications running on z/OS, whether traditional, SOA or Web-based.
A CICS utility transaction (CADP or DTCN) enables you to control debugging in the CICS environment. For example, you can debug based on a specific program or transaction name, while other CICS-specific capabilities enable you to specify the span of a debug session, view or edit CICS storage, and diagnose storage violations.
Display and alteration of 64-bit general-purpose registers in assembler expressions is provided on hardware that supports 64-bit addressing. Debug Tool correctly displays data items according to type, including three floating-point data types: binary (IEEE), decimal and hexadecimal.
Figure 3: Formatted XML structure using List Storage command
New in V13.1
• A new method for gathering code coverage is added for the generation, viewing and reporting of code coverage using the Debug Tool mainframe interface (MFI) as the engine. This support is provided for applications written in Enterprise COBOL and Enterprise PL/I that are compiled with the TEST compiler option and its SEPARATE suboption. It is enabled via the new EQAOPTS CCPROGSELECTDSN, CCOUTPUTDSN and CCOUTPUTDSNALLOC commands.
• Debug Tool is enhanced to provide automatic start of IMS message processing program (MPP) regions and dynamic routing of transactions. A developer can dynamically start an MPP region, route a transaction to it and, at the end of the transaction, shut down the MPP region created for the developer, reducing system resource usage.
• To improve ease of use of the MFI mode of Debug Tool for some users, an option is added that enables breakpoints, the current line, and the line with found text to be identified by a character indicator. This feature is enabled via the new EQAOPTS ALTDISP command.
• Supports Enterprise COBOL for z/OS, V5.1; Enterprise PL/I for z/OS, V4.4; CICS TS V5.1; DB2 V11; IMS V13; z/OS, V2.1; and C/C++ V2.1.
• Debug Tool now supports JCL for batch debugging in the DTSP plug-in. This facility is used to instrument JCL to initiate a debug session from the DTSP plug-in.
• Support is added for an IMS transaction that is associated with a generic ID. A new feature is added to the consolidated Language Environment user exit (EQAD3CXT) to search a new cross-reference table for the user ID of a user who wants to debug an IMS transaction that is started from the web and associated with a generic ID. This enables Debug Tool to debug these transactions. The user ID from the cross-reference table is used to find the user's Debug Tool user exit data set (userid.DBGTOOL.EQAUOPTS), which specifies the TEST runtime parameters and the display device address. A new option, "C IMS Transaction and User ID Cross Reference Table", is added to the Debug Tool Utilities ISPF panel to allow each user to update the new cross-reference table.
• Support is added for tracing load modules loaded by an application. New commands TRACE LOAD and LIST TRACE LOAD are added to Debug Tool's MFI mode. Start the trace by issuing TRACE LOAD START, and use LIST TRACE LOAD to display it. The trace includes load modules known to Debug Tool at the time the TRACE LOAD START command is entered and all that are loaded while the trace is active. End the trace by issuing TRACE LOAD END; note that when the trace is ended, all trace information is deleted.
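Taken together, a typical load-module trace in MFI mode combines the commands named above. This is only a sketch; the STEP 100 command here simply stands in for running part of the application between starting and listing the trace:

```
TRACE LOAD START ;
STEP 100 ;
LIST TRACE LOAD ;
TRACE LOAD END ;
```

Any modules loaded between START and END are captured; once END is issued, the collected trace information is gone.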
• Support is added for terminating an idle Debug Tool session that uses the Terminal Interface Manager. Debug Tool supports a time-out option (via the new EQAOPTS SESSIONTIMEOUT command), which allows the system programmer to establish a maximum wait time for debug sessions using a dedicated terminal or the Terminal Interface Manager. If a debug session exceeds the specified time limit without any user interaction, the session is terminated with either a QUIT or a QUIT DEBUG.
• The enhanced Debug Tool Coverage Utility 'Create HTML Targeted Coverage Report' allows the user to select from a list of COBOL Program-IDs, ignore changes to non-executable code, and produce a summary of the targeted lines with selectable HTML links.
• Adds IMS information to start and stop messages generated by the EQAOPTS STARTSTOPMSG command.
• Adds the EQAOPTS STARTSTOPMSGDSN command and a new Debug Tool Utilities option, 'Non-CICS Debug Session Start and Stop Message Viewer', to collect and view Debug Tool debugger session start and stop information.
• Enhances delay debug mode with a new EQAOPTS DLAYDBGCND command to control CONDITION trapping. In addition, a new EQAOPTS DLAYDBGXRF command is added so that delay debug mode can use the 'IMS Transaction and User ID Cross Reference Table'. Further, NOTEST is now handled in delay debug mode.
• A confirmation message is added to Debug Tool Utilities option 6, 'Debug Tool User Exit Data Set', to indicate that updates have been saved into the EQAUOPTS data set.
• The ON and AT OCCURRENCE commands are enhanced for Enterprise PL/I to support qualifying data.
• New commands are added: LIST LDD and CLEAR LDD, which display and remove LDD commands known to Debug Tool, and LIST CC and CC (START and STOP), which tell the Debug Tool debugger to gather and display code coverage data.
• Other customer-requested capabilities and usability improvements enhance the debugging session.