Frieda has 600 simulations to run that will each take 6 hours. She learns about Condor from colleagues and installs a "personal Condor" on her workstation. This allows Condor to manage her 600 jobs and run them across available resources over time. The document outlines how Frieda organizes her files and directories, writes a submit description file to describe the jobs to Condor, and submits the jobs to her personal Condor pool.
1. Using Condor: An Introduction
Condor Project
Computer Sciences Department
University of Wisconsin-Madison
condor-admin@cs.wisc.edu
http://www.cs.wisc.edu/condor
2. Tutorial Outline
› The story of Frieda, the scientist
› Using Condor to manage jobs
› Using Condor to manage resources
› Condor architecture and mechanisms
› Condor on the grid
  • Flocking
  • Condor and other grid technologies
› Stop me if you have any questions!
4. Frieda's Application …
Run a parameter sweep of F(x,y,z) for 20 values of x, 10 values of y, and 3 values of z
  • 20 × 10 × 3 = 600 combinations
  • F takes on average 6 hours to compute on a "typical" workstation (total = 600 × 6 = 3600 hours)
  • F requires a "moderate" (256 MB) amount of memory
  • F performs "moderate" I/O: each input (x,y,z) is 5 MB and each result F(x,y,z) is 50 MB
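The arithmetic behind the sweep can be restated in a few lines of plain Python (just the slide's numbers, nothing Condor-specific):

```python
# Parameter sweep dimensions from the slide: 20 x-values, 10 y-values, 3 z-values.
combinations = 20 * 10 * 3
print(combinations)        # 600 independent jobs

# Each evaluation of F takes about 6 hours on a typical workstation.
serial_hours = combinations * 6
print(serial_hours)        # 3600 hours if run one after another

# Roughly 150 days of serial compute -- which is why Frieda wants Condor
# to farm the 600 independent jobs out across idle machines.
print(serial_hours / 24)   # 150.0 days
```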
7. Getting Condor
› Available as a free download from http://www.cs.wisc.edu/condor
› Download Condor for your operating system
  • Available for most UNIX platforms (including Linux and Apple's OS X)
  • Also for Windows NT / XP
8. Condor Releases
› Stable / Developer releases
  • Version numbering scheme similar to that of the (pre-2.6) Linux kernels: major.minor.release
  • Minor is even (a.b.c): Stable
    – Examples: 6.6.3, 6.8.4, 6.8.5
    – Very stable, mostly bug fixes
  • Minor is odd (a.b.c): Developer
    – New features, may have some bugs
    – Examples: 6.7.11, 6.9.1, 6.9.2
9. Frieda Installs a "Personal Condor" on her machine…
› What do we mean by a "Personal" Condor?
  • Condor on your own workstation
  • No root / administrator access required
  • No system administrator intervention needed
› After installation, Frieda submits her jobs to her Personal Condor…
12. Your Personal Condor will …
› Keep an eye on your jobs and keep you posted on their progress
› Implement your policy on the execution order of the jobs
› Keep a log of your job activities
› Add fault tolerance to your jobs
› Implement your policy on when the jobs can run on your workstation
15. Machine
Machines state their requirements and preferences:
  • Run jobs only when there is no keyboard activity
  • I prefer to run Frieda's jobs
  • I am a machine in the physics department
  • Never run jobs belonging to Dr. Smith
16. The Magic of Matchmaking
› Jobs and machines state their requirements and preferences
› Condor matches jobs with machines based on requirements and preferences
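The matchmaking idea on this slide can be sketched in a few lines of Python. This is an illustration only, not Condor's actual ClassAd language or its real attribute names: each side is a dictionary of attributes plus a requirements predicate over the other side, and a match requires both predicates to hold.

```python
# Toy version of Condor-style matchmaking (illustrative only; the real
# ClassAd language expresses requirements as declarative expressions).

job_ad = {
    "Owner": "frieda",
    "ImageSize": 256,  # MB of memory the job needs (hypothetical value)
    # The job requires a machine with at least 256 MB of memory.
    "Requirements": lambda machine: machine["Memory"] >= 256,
}

machine_ad = {
    "Memory": 512,
    "KeyboardIdle": 1800,  # seconds since the last keypress
    # "Never run jobs belonging to Dr. Smith."
    "Requirements": lambda job: job["Owner"] != "smith",
}

def match(job, machine):
    # "Run jobs only when there is no keyboard activity" --
    # modeled here as 15 minutes of keyboard idle time.
    if machine["KeyboardIdle"] < 900:
        return False
    # A match requires BOTH sides' requirements to be satisfied.
    return job["Requirements"](machine) and machine["Requirements"](job)

print(match(job_ad, machine_ad))  # True
```

In real Condor both sides can also express preferences (Rank) that break ties among acceptable matches; this sketch only models the hard requirements.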
19. Using the Vanilla Universe
• The Vanilla Universe:
  – Allows running almost any "serial" job
  – Provides automatic file transfer, etc.
  – Like vanilla ice cream: can be used in just about any situation
21. Make your job batch-ready (continued)…
The job can still use STDIN, STDOUT, and STDERR (the keyboard and the screen), but files are used for these instead of the actual devices
Similar to UNIX shell redirection:
  $ ./myprogram <input.txt >output.txt
22. 3. Create a Submit Description File
› A plain ASCII text file
› Condor does not care about file extensions
› Tells Condor about your job: which executable, universe, input, output and error files to use, command-line arguments, environment variables, and any special requirements or preferences (more on this later)
› Can describe many jobs at once (a "cluster"), each with different input, arguments, output, etc.
23. Simple Submit Description File
# Simple condor_submit input file
# (Lines beginning with # are comments)
# NOTE: the words on the left side are not case-sensitive, but filenames are!
Universe   = vanilla
Executable = my_job
Output     = output.txt
Queue
24. 4. Run condor_submit
› You give condor_submit the name of the submit file you have created:
  condor_submit my_job.submit
› condor_submit:
  • Parses the submit file, checks for errors
  • Creates a "ClassAd" that describes your job(s)
  • Puts the job(s) in the Job Queue
ClassAd?
› Condor’s internal data representation
Similar to classified ads (as the name
implies)
Represent an object & its attributes
• Usually many attributes
Can also describe what an object
matches with
Example Submit Description
File With Logging
# Example condor_submit input file
# (Lines beginning with # are comments)
# NOTE: the words on the left side are not
# case sensitive, but filenames are!
Universe = vanilla
Executable = /home/frieda/condor/my_job.condor
Log = my_job.log        # job log (from Condor)
Input = my_job.in       # program’s standard input
Output = my_job.out     # program’s standard output
Error = my_job.err      # program’s standard error
Arguments = -a1 -a2     # command-line arguments
InitialDir = /home/frieda/condor/run
Queue
“Clusters” and “Processes”
› If your submit file describes multiple jobs, we call
this a “cluster”
› Each cluster has a unique “cluster number”
› Each job in a cluster is called a “process”
Process numbers always start at zero
› A Condor “Job ID” is the cluster number, a period,
and the process number (e.g., 2.1)
A cluster can have a single process
• Job ID = 20.0 (cluster 20, process 0)
Or, a cluster can have more than one process
• Job IDs 21.0, 21.1, 21.2 (cluster 21, processes 0, 1, 2)
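To make the numbering concrete, here is a tiny helper (hypothetical, not part of Condor) that splits a Job ID string into its cluster and process numbers:

```python
# Hypothetical helper: split a Condor Job ID like "21.2"
# into its (cluster, process) pair of integers.
def parse_job_id(job_id):
    cluster, _, process = job_id.partition(".")
    return int(cluster), int(process)

print(parse_job_id("20.0"))  # (20, 0)
print(parse_job_id("21.2"))  # (21, 2)
```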
Submit File for a Cluster
# Example submit file for a cluster of 2 jobs
# with separate input, output, error and log files
Universe = vanilla
Executable = my_job
Arguments = -x 0
log = my_job_0.log
Input = my_job_0.in
Output = my_job_0.out
Error = my_job_0.err
Queue    # Job 2.0 (cluster 2, process 0)
Arguments = -x 1
log = my_job_1.log
Input = my_job_1.in
Output = my_job_1.out
Error = my_job_1.err
Queue    # Job 2.1 (cluster 2, process 1)
% condor_submit my_job.submit-file
Submitting job(s).
2 job(s) submitted to cluster 2.
% condor_q
-- Submitter: perdita.cs.wisc.edu : <128.105.165.34:1027> :
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
1.0 frieda 4/15 06:52 0+00:02:11 R 0 0.0 my_job -a1 -a2
2.0 frieda 4/15 06:56 0+00:00:00 I 0 0.0 my_job -x 0
2.1 frieda 4/15 06:56 0+00:00:00 I 0 0.0 my_job -x 1
3 jobs; 2 idle, 1 running, 0 held
%
Submitting The Job
Back to our 600 jobs…
› We could put all input, output, error &
log files in the one directory
One of each type for each job
That’d be 2400 files (4 files × 600 jobs)
Difficult to sort through
› Better: Create a subdirectory for
each run
Organize your files and
directories for big runs
› Create subdirectories for each “run”
run_0, run_1, … run_599
› Create input files in each of these
run_0/simulation.in
run_1/simulation.in
…
run_599/simulation.in
› The output, error & log files for each job
will be created by Condor from your job’s
output
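The directory layout above is easy to script. The sketch below (an illustration, not a Condor tool; the input-file contents are placeholders) creates the run_N subdirectories, each holding its simulation.in file:

```python
import os
import tempfile

# Illustrative sketch: create run_0 ... run_(n-1) subdirectories,
# each with a placeholder simulation.in input file.
def make_run_dirs(n_runs, base):
    for i in range(n_runs):
        run_dir = os.path.join(base, f"run_{i}")
        os.makedirs(run_dir, exist_ok=True)
        with open(os.path.join(run_dir, "simulation.in"), "w") as f:
            f.write(f"# input for run {i}\n")

# Demo with 3 runs in a temporary directory; Frieda would use 600.
base = tempfile.mkdtemp()
make_run_dirs(3, base)
print(sorted(os.listdir(base)))  # ['run_0', 'run_1', 'run_2']
```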
Submit Description File for
600 Jobs
# Cluster of 600 jobs with different directories
Universe = vanilla
Executable = sim
Log = simulation.log
...
Arguments = -x 0
InitialDir = run_0      # log, input, output & error files go to run_0
Queue                   # Job 3.0 (cluster 3, process 0)
Arguments = -x 1
InitialDir = run_1      # log, input, output & error files go to run_1
Queue                   # Job 3.1 (cluster 3, process 1)
# ... and so on, 598 more times
Submit File for a Big Cluster
of Jobs
› We just submitted 1 cluster with 600
processes
› All the input/output files will be in
different directories
› The submit file is pretty unwieldy (over
1200 lines)
› Isn’t there a better way?
Submit File for a Big Cluster
of Jobs (the better way) #1
› We can queue all 600 in 1 “Queue”
command
Queue 600
› Condor provides $(Process) and
$(Cluster)
$(Process) will be expanded to the
process number for each job in the cluster
• 0, 1, … 599
$(Cluster) will be expanded to the
cluster number
• Will be 4 for all jobs in this cluster
Submit File for a Big Cluster
of Jobs (the better way) #2
› The initial directory for each job can
be specified using $(Process)
InitialDir = run_$(Process)
Condor will expand these to “run_0”,
“run_1”, … “run_599” directories
› Similarly, arguments can be variable
Arguments = -x $(Process)
Condor will expand these to “-x 0”,
“-x 1”, … “-x 599”
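The expansion can be sketched in a few lines. This toy function (not condor_submit's real implementation) shows how a submit-file line is rewritten for each queued process:

```python
# Toy sketch of how condor_submit expands the $(Cluster) and
# $(Process) macros: each queued job gets its own values.
def expand_macros(template, cluster, process):
    return (template
            .replace("$(Cluster)", str(cluster))
            .replace("$(Process)", str(process)))

cluster = 4  # assumed cluster number, for illustration
for process in range(3):  # Frieda's real cluster has 600 processes
    print(expand_macros("InitialDir = run_$(Process)", cluster, process))
# InitialDir = run_0 ... InitialDir = run_2
```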
Better Submit File for 600
Jobs
# Example condor_submit input file that defines
# a cluster of 600 jobs with different directories
Universe = vanilla
Executable = my_job
Log = my_job.log
Input = my_job.in
Output = my_job.out
Error = my_job.err
Arguments = -x $(Process)       # -x 0, -x 1, ... -x 599
InitialDir = run_$(Process)     # run_0 ... run_599
Queue 600                       # Jobs 4.0 ... 4.599
And, Check the queue
$ condor_q
-- Submitter: x.cs.wisc.edu : <128.105.121.53:510> : x.cs.wisc.edu
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
4.0 frieda 4/20 12:08 0+00:00:05 R 0 9.8 my_job -arg1 -x 0
4.1 frieda 4/20 12:08 0+00:00:03 I 0 9.8 my_job -arg1 -x 1
4.2 frieda 4/20 12:08 0+00:00:01 I 0 9.8 my_job -arg1 -x 2
4.3 frieda 4/20 12:08 0+00:00:00 I 0 9.8 my_job -arg1 -x 3
...
4.598 frieda 4/20 12:08 0+00:00:00 I 0 9.8 my_job -arg1 -x 598
4.599 frieda 4/20 12:08 0+00:00:00 I 0 9.8 my_job -arg1 -x 599
600 jobs; 599 idle, 1 running, 0 held
Removing jobs
› If you want to remove a job from the
Condor queue, you use condor_rm
› You can only remove jobs that you
own
› A privileged user can remove any job
“root” on UNIX
“administrator” on Windows
Removing jobs (continued)
› Remove an entire cluster:
condor_rm 4         # removes the whole cluster
› Remove a specific job from a cluster:
condor_rm 4.0       # removes a single job
› Or, remove all of your jobs with “-a”:
condor_rm -a        # removes all of your jobs/clusters
Access to Data in Condor
› Use shared filesystem if available
› No shared filesystem?
Condor can transfer files
• Can automatically send back changed files
• Can transfer multiple files
• Can be encrypted over the wire
Remote I/O Socket
Standard Universe can use remote
system calls (more on this later)
Condor File Transfer
› ShouldTransferFiles = YES
Always transfer files to execution site
› ShouldTransferFiles = NO
Rely on a shared filesystem
› ShouldTransferFiles = IF_NEEDED
Will automatically transfer the files if the submit and
execute machine are not in the same FileSystemDomain
Universe = vanilla
Executable = my_job
Log = my_job.log
ShouldTransferFiles = IF_NEEDED
Transfer_input_files = dataset.$(Process), common.data
Transfer_output_files = TheAnswer.dat
Queue 600
Specify Requirements
› An expression (syntax similar to C or Java)
› Must evaluate to True for a match to be
made
Universe = vanilla
Executable = my_job
Log = my_job.log
InitialDir = run_$(Process)
Requirements = Memory >= 256 && Disk > 10000
Queue 600
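To illustrate the matching side, here is a toy evaluator (not Condor's real ClassAd machinery; the machine ads are invented) that applies the Requirements expression above to a list of machine ads:

```python
# Toy illustration of matchmaking: a job's Requirements expression
# is evaluated against each machine ClassAd, and only machines
# where it is True are candidates for a match.
def meets_requirements(machine_ad):
    # Mirrors: Requirements = Memory >= 256 && Disk > 10000
    return (machine_ad.get("Memory", 0) >= 256
            and machine_ad.get("Disk", 0) > 10000)

machines = [  # invented machine ads, for illustration only
    {"Name": "vm1", "Memory": 512, "Disk": 50000},
    {"Name": "vm2", "Memory": 128, "Disk": 50000},
]
matches = [m["Name"] for m in machines if meets_requirements(m)]
print(matches)  # ['vm1']
```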
Advanced Requirements
› Requirements can match custom attributes in your
Machine Ad
Can be added by hand to each machine
Or, automatically using the “Hawkeye” mechanism
Universe = vanilla
Executable = my_job
Log = my_job.log
InitialDir = run_$(Process)
Requirements = Memory >= 256 && Disk > 10000
&& (HaveProg =!= UNDEFINED && HaveProg)
Queue 600
And, Specify Rank
› All matches which meet the requirements
can be sorted by preference with a Rank
expression.
› The higher the Rank, the better the match
Universe = vanilla
Executable = my_job
Log = my_job.log
Arguments = -arg1 -arg2
InitialDir = run_$(Process)
Requirements = Memory >= 256 && Disk > 10000
Rank = (KFLOPS*10000) + Memory
Queue 600
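Rank can be illustrated the same way. This toy sketch (invented machine ads, not Condor's real matchmaker) picks the most preferred machine using the Rank expression above:

```python
# Toy sketch: among machines that satisfy Requirements, Condor
# prefers the one with the highest Rank value.
def rank(machine_ad):
    # Mirrors: Rank = (KFLOPS*10000) + Memory
    return machine_ad["KFLOPS"] * 10000 + machine_ad["Memory"]

candidates = [  # invented machine ads, for illustration only
    {"Name": "slow", "KFLOPS": 1, "Memory": 512},
    {"Name": "fast", "KFLOPS": 5, "Memory": 256},
]
best = max(candidates, key=rank)
print(best["Name"])  # fast
```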
Check the queue
› Check the queue with condor_q:
bash-2.05a$ condor_q
-- Submitter: x.cs.wisc.edu : <128.105.121.53:510> :x.cs.wisc.edu
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
5.0 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 0
5.1 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 1
5.2 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 2
5.3 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 3
5.4 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 4
5.5 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 5
5.6 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 6
5.7 frieda 4/20 12:23 0+00:00:00 I 0 9.8 my_job -arg1 -n 7
6.0 frieda 4/20 13:22 0+00:00:00 H 0 9.8 my_job -arg1 -arg2
9 jobs; 8 idle, 0 running, 1 held
Look at jobs on hold
% condor_q -hold
-- Submitter: x.cs.wisc.edu : <128.105.121.53:510>
:x.cs.wisc.edu
ID OWNER HELD_SINCE HOLD_REASON
6.0 frieda 4/20 13:23 Error from starter
on vm1@skywalker.cs.wisc
9 jobs; 8 idle, 0 running, 1 held
Or, see full details for a job:
% condor_q -l 6.0
Check machine status
› Verify that there are idle machines with condor_status:
bash-2.05a$ condor_status
Name OpSys Arch State Activity LoadAv Mem ActvtyTime
vm1@tonic.c LINUX INTEL Claimed Busy 0.000 501 0+00:00:20
vm2@tonic.c LINUX INTEL Claimed Busy 0.000 501 0+00:00:19
vm3@tonic.c LINUX INTEL Claimed Busy 0.040 501 0+00:00:17
vm4@tonic.c LINUX INTEL Claimed Busy 0.000 501 0+00:00:05
Total Owner Claimed Unclaimed Matched Preempting
INTEL/LINUX 4 0 4 0 0 0
Total 4 0 4 0 0 0
Look in Job Log
› Look in your job log for clues:
bash-2.05a$ cat my_job.log
000 (031.000.000) 04/20 14:47:31 Job submitted from host:
<128.105.121.53:48740>
...
007 (031.000.000) 04/20 15:02:00 Shadow exception!
Error from starter on gig06.stat.wisc.edu: Failed
to open '/scratch.1/frieda/workspace/v67/condor-
test/test3/run_0/my_job.in' as standard input: No such
file or directory (errno 2)
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
...
Look to condor_q for help:
condor_q -analyze
bash-2.05a$ condor_q -ana 29
---
029.000: Run analysis summary. Of 1243 machines,
1243 are rejected by your job's requirements
0 are available to run your job
WARNING: Be advised:
No resources matched request's constraints
Check the Requirements expression below:
Requirements = ((Memory > 8192)) && (Arch == "INTEL") &&
(OpSys == "LINUX") && (Disk >= DiskUsage) &&
(TARGET.FileSystemDomain == MY.FileSystemDomain)
Learn about available
resources:
bash-2.05a$ condor_status -const 'Memory > 8192'
(no output means no matches)
bash-2.05a$ condor_status -const 'Memory > 4096'
Name OpSys Arch State Activ LoadAv Mem ActvtyTime
vm1@s0-03.cs. LINUX X86_64 Unclaimed Idle 0.000 5980 1+05:35:05
vm2@s0-03.cs. LINUX X86_64 Unclaimed Idle 0.000 5980 13+05:37:03
vm1@s0-04.cs. LINUX X86_64 Unclaimed Idle 0.000 7988 1+06:00:05
vm2@s0-04.cs. LINUX X86_64 Unclaimed Idle 0.000 7988 13+06:03:47
Total Owner Claimed Unclaimed Matched Preempting
X86_64/LINUX 4 0 0 4 0 0
Total 4 0 0 4 0 0
Job Policy Expressions
› User can supply job policy expressions in
the submit file.
› Can be used to describe a successful run.
on_exit_remove = <expression>
on_exit_hold = <expression>
periodic_remove = <expression>
periodic_hold = <expression>
Job Policy Examples
› Do not remove if exits with a signal:
on_exit_remove = ExitBySignal == False
› Place on hold if exits with nonzero status
or ran for less than an hour:
on_exit_hold = ((ExitBySignal == False)
&& (ExitCode != 0)) ||
((ServerStartTime - JobStartDate) < 3600)
› Place on hold if job has spent more than
50% of its time suspended:
periodic_hold = CumulativeSuspensionTime >
(RemoteWallClockTime / 2.0)
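The hold policy can be modeled as a small predicate. This is a toy illustration with invented attribute values; it assumes the ExitCode attribute, which holds the exit status when ExitBySignal is False:

```python
# Toy model of the on_exit_hold policy above: hold the job if it
# exited with a nonzero status, or ran for less than an hour.
# Attribute values below are invented for illustration.
def should_hold(ad):
    exited_nonzero = (not ad["ExitBySignal"]) and ad["ExitCode"] != 0
    ran_under_an_hour = (ad["ServerStartTime"] - ad["JobStartDate"]) < 3600
    return exited_nonzero or ran_under_an_hour

job = {"ExitBySignal": False, "ExitCode": 1,
       "ServerStartTime": 10_000, "JobStartDate": 2_000}
print(should_hold(job))  # True: nonzero exit code
```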
My new jobs run for 20
days…
› What happens when a job is
forced off its CPU?
Preempted by higher priority
user or job
Vacated because of user activity
› How can I add fault tolerance
to my jobs?
Remote System Calls in
the Standard Universe
› I/O system calls are trapped and sent back
to the submit machine
Examples: open a file, write to a file
› No source code changes typically required
› Programming language independent
Process Checkpointing in the
Standard Universe
› Condor’s process checkpointing provides a
mechanism to automatically save the
state of a job
› The process can then be restarted from
right where it was checkpointed
After preemption, crash, etc.
When will Condor
checkpoint your job?
› Periodically, if desired
For fault tolerance
› When your job is preempted by a higher
priority job
› When your job is vacated because the
execution machine becomes busy
› When you explicitly run condor_checkpoint,
condor_vacate, condor_off or
condor_restart command
Making the Standard
Universe Work
› The job must be relinked with Condor’s
standard universe support library
› To relink, place condor_compile in front of
the command used to link the job:
% condor_compile gcc -o myjob myjob.c
- OR -
% condor_compile f77 -o myjob filea.f fileb.f
- OR -
% condor_compile make -f MyMakefile
Limitations of the
Standard Universe
› Condor’s checkpointing is not at the kernel
level.
In the Standard Universe, the job may not:
• fork()
• Use kernel threads
• Use some forms of IPC, such as pipes and shared
memory
› Must have access to source code to relink
› Many typical scientific jobs are OK
Frieda learns DAGMan
› Directed Acyclic Graph Manager
› DAGMan allows you to specify the
dependencies between your Condor jobs, so
it can manage them automatically for you.
› (e.g., “Don’t run job “B” until job “A” has
completed successfully.”)
What is a DAG?
› A DAG is the data structure
used by DAGMan to represent
these dependencies.
› Each job is a “node” in the
DAG.
› Each node can have any
number of “parent” or
“children” nodes – as long as
there are no loops!
(Slide graphic: a sample DAG with four nodes, Job A, Job B, Job C, and Job D)
Defining a DAG
› A DAG is defined by a .dag file, listing each of its
nodes and their dependencies:
# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D
› each node will run the Condor job specified by its
accompanying Condor submit file
(Slide graphic: the “diamond” DAG, with Job A on top, Jobs B and C in the middle, and Job D at the bottom)
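The .dag file format above is simple enough to parse by hand. The sketch below (a toy illustration, not DAGMan's actual parser; it assumes the graph is acyclic) reads the Job and Parent lines and computes one order in which parents always run before their children:

```python
# Toy parser for the Job/Parent lines of a .dag file.
def parse_dag(text):
    deps = {}  # node -> set of parent nodes
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue
        if parts[0].lower() == "job":
            deps.setdefault(parts[1], set())
        elif parts[0].lower() == "parent":
            split = parts.index("Child")
            parents, children = parts[1:split], parts[split + 1:]
            for child in children:
                deps.setdefault(child, set()).update(parents)
    return deps

# One valid run order: a node becomes ready once all parents are done.
# Assumes an acyclic graph (a cycle would loop forever).
def topo_order(deps):
    order, done = [], set()
    while len(done) < len(deps):
        ready = sorted(n for n, ps in deps.items()
                       if n not in done and ps <= done)
        order += ready
        done.update(ready)
    return order

diamond = """# diamond.dag
Job A a.sub
Job B b.sub
Job C c.sub
Job D d.sub
Parent A Child B C
Parent B C Child D
"""
print(topo_order(parse_dag(diamond)))  # ['A', 'B', 'C', 'D']
```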
Submitting a DAG
› To start your DAG, just run
condor_submit_dag with your .dag file,
and Condor will start a personal DAGMan
daemon to begin running your jobs:
% condor_submit_dag diamond.dag
› condor_submit_dag is run by the schedd
DAGMan daemon itself is “watched” by Condor,
so you don’t have to
Running a DAG (cont’d)
› In case of a job failure, DAGMan continues until it
can no longer make progress, and then creates a
“rescue” file with the current state of the DAG.
(Slide graphic: DAGMan, submitting through the Condor job queue, marks the failed node with an X and writes the state of the remaining DAG, nodes A, B and D, to a rescue file)
Recovering a DAG
› Once the failed job is ready to be re-run,
the rescue file can be used to restore the
prior state of the DAG.
(Slide graphic: DAGMan reads the rescue file and re-submits node C to the Condor job queue; completed nodes A and B are not re-run)
General User Commands
› condor_status View pool status
› condor_q View job queue
› condor_submit Submit new jobs
› condor_rm Remove jobs
› condor_prio Adjust priorities among your own jobs
› condor_history View completed job info
› condor_submit_dag Submit a new DAG
› condor_checkpoint Force a checkpoint
› condor_compile Link a job with the Condor library
Condor Job Universes
• Serial Jobs
• Vanilla Universe
• Standard
Universe
• Grid Universe
• Scheduler Universe
• Local Universe
• Java Universe
• Parallel Jobs
• MPI Universe
• PVM Universe
• Parallel Universe
Why have a special
Universe for Java jobs?
› Java Universe provides more than just
inserting “java” at the start of the execute
line of a vanilla job:
Knows which machines have a JVM installed
Knows the location, version, and performance of
JVM on each machine
Knows about jar files, etc.
Provides more information about Java job
completion than just JVM exit code
• Program runs in a Java wrapper, allowing Condor to
report Java exceptions, etc.
Java support, cont.
bash-2.05a$ condor_status -java
Name JavaVendor Ver State Actv LoadAv Mem
abulafia.cs Sun Microsy 1.5.0_ Claimed Busy 0.180 503
acme.cs.wis Sun Microsy 1.5.0_ Unclaimed Idle 0.000 503
adelie01.cs Sun Microsy 1.5.0_ Claimed Busy 0.000 1002
adelie02.cs Sun Microsy 1.5.0_ Claimed Busy 0.000 1002
…
Total Owner Claimed Unclaimed Matched Preempting
INTEL/LINUX 965 179 516 250 20 0
INTEL/WINNT50 102 6 65 31 0 0
SUN4u/SOLARIS28 1 0 0 1 0 0
X86_64/LINUX 128 2 106 20 0 0
Total 1196 187 687 302 20 0
Frieda wants Condor features
on remote resources
› She wants to run standard universe
jobs on Grid-managed resources
For matchmaking and dynamic scheduling
of jobs
For job checkpointing and migration
For remote system calls
Condor GlideIn
› Frieda can use the Grid Universe to run
Condor daemons on Grid resources
› When the resources run these GlideIn
jobs, they will temporarily join her Condor
Pool
› She can then submit Standard, Vanilla,
PVM, or MPI Universe jobs and they will be
matched and run on the remote resources
› Currently only supports Globus GT2
We hope to fix this limitation
GlideIn Concerns
› What if the remote resource kills my GlideIn job?
That resource will disappear from your pool and your jobs
will be rescheduled on other machines
Standard universe jobs will resume from their last
checkpoint like usual
› What if all my jobs are completed before a
GlideIn job runs?
If a GlideIn Condor daemon is not matched with a job in
10 minutes, it terminates, freeing the resource
In Review
With Condor’s help, Frieda can:
Manage her compute job workload
Access local machines
Access remote Condor Pools via flocking
Access remote compute resources on
the Grid via “Grid Universe” jobs
Carve out her own personal Condor Pool
from the Grid with GlideIn technology
Use CondorView!
› Provides visual graphs of current and past
utilization
› Data is derived from Condor's own accounting
statistics
› Interactive Java applet
› Quickly and easily view:
How much Condor is being used
How many cycles are being delivered
Who is using them
Utilization by machine platform or by user
A Common Question
› My Personal Condor is flocking with a bunch of
Solaris and Linux machines, and also doing a
GlideIn to an SGI O2K. I do not want to statically
partition my jobs.
Solution: In your submit file, specify:
Executable = myjob.$$(OpSys).$$(Arch)
Requirements = (Arch == "INTEL" && OpSys == "LINUX")
|| (Arch == "SUN4u" && OpSys == "SOLARIS8")
|| (Arch == "SGI" && OpSys == "IRIX65")
The “$$(xxx)” notation is replaced with attributes from
the machine ClassAd which was matched with your job.
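The substitution itself is a simple textual rewrite. This sketch (not Condor's implementation; it assumes every referenced attribute exists in the matched machine's ClassAd) shows the idea:

```python
import re

# Toy sketch of $$(attribute) substitution: after matchmaking, each
# $$(Attr) in the submit file is replaced with that attribute's value
# from the matched machine's ClassAd.
def substitute(template, machine_ad):
    # Assumes every referenced attribute is present in machine_ad.
    return re.sub(r"\$\$\((\w+)\)",
                  lambda m: str(machine_ad[m.group(1)]), template)

machine = {"OpSys": "LINUX", "Arch": "INTEL"}  # invented machine ad
print(substitute("myjob.$$(OpSys).$$(Arch)", machine))
# myjob.LINUX.INTEL
```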
A DAG is the best data structure to represent a workflow of jobs with dependencies.
Children may not run until their parents have finished; this is why the graph is directed: there is a direction to the flow of work.
In this example, called a “diamond” dag, job A must run first; when it finishes, jobs B and C can run together; when they are both finished, D can run; when D is finished the DAG is finished
Loops, where two jobs are both descended from one another, are prohibited because they would lead to deadlock – in a loop, neither node could run until the other finished, and so neither would start – this restriction is what makes the graph acyclic
This is all it takes to specify the example “diamond” dag
Just like any other Condor job, you get fault tolerance in case the machine crashes or reboots, or if there’s a network outage
And you’re notified when it’s done, and whether it succeeded or failed
% condor_q
-- Submitter: foo.bar.edu : <128.105.175.133:1027> : foo.bar.edu
ID OWNER SUBMITTED RUN_TIME ST PRI SIZE CMD
2456.0 user 3/8 19:22 0+00:00:02 R 0 3.2 condor_dagman -f -
First, job A will be submitted alone…
Once job A completes successfully, jobs B and C will be submitted at the same time…
If job C fails, DAGMan will wait until job B completes, and then will exit, creating a rescue file. Job D will not run.
In its log, DAGMan will provide additional details of which node failed and why.
Since jobs A and B have already completed, DAGMan will start by re-submitting job C
If job C fails again, DAGMan will once more exit and create a new rescue file.