A Review of Condor, SGE and PBS


  1. GRID COMPUTING: A REVIEW OF CONDOR, SGE AND PBS. Sandeep Kumar Poonia, Head of Dept. CS/IT, Jagan Nath University, Jaipur. B.E., M.Tech., UGC-NET. LM-IAENG, LM-IACSIT, LM-CSTA, LM-AIRCC, LM-SCIEI, AM-UACEE. 11/16/2013
  2. Condor
     Condor is a resource management and job scheduling system, a research project from the University of Wisconsin–Madison.
     Condor platforms:
      HP systems running HP-UX 10.20
      Sun SPARC systems running Solaris 2.6/2.7/8/9
      SGI systems running IRIX 6.5 (not fully supported)
      Intel x86 systems running Red Hat Linux 7.1/7.2/7.3/8.0/9.0, Windows NT 4.0, XP and 2003 Server (the Windows systems are not fully supported)
      Alpha systems running Digital UNIX 4.0, Red Hat Linux 7.1/7.2/7.3 and Tru64 5.1 (not fully supported)
      PowerPC systems running Macintosh OS X and AIX 5.2L (not fully supported)
      Itanium systems running Red Hat 7.1/7.2/7.3 (not fully supported)
      Windows systems (not fully supported)
  3. The architecture of a Condor pool
     Resources in Condor are normally organized in the form of Condor pools. A pool is an administered domain of hosts, not specifically dedicated to a Condor environment.
  4.  A Condor pool normally has one Central Manager (master host) and an arbitrary number of Execution (worker) hosts.
      A Condor Execution host can be configured as a job Execution host, a job Submission host, or both.
      The Central Manager host is used to manage resources and jobs in a Condor pool.
      Host machines in a Condor pool need not be dedicated to Condor.
      If the Central Manager host in a Condor pool crashes, jobs that are already running will continue to run unaffected. Queued jobs will remain in the queue unharmed, but they cannot begin running until the Central Manager host is restarted.
  5. Daemons in a Condor pool
     A daemon is a program that runs in the background once started. To configure a Condor pool, the following Condor daemons need to be started.
  6. Daemons in a Condor pool
      The condor_master daemon runs on each host in a Condor pool to keep all the other daemons in the pool running.
      It spawns daemons such as condor_startd and condor_schedd, and periodically checks whether new binaries have been installed for any of these daemons. If so, the condor_master restarts the affected daemons.
      In addition, if any daemon crashes, the condor_master sends an email to the administrator of the Condor pool and restarts the daemon.
      The condor_master also supports various administrative commands, such as starting, stopping or reconfiguring daemons remotely.
  7. Daemons in a Condor pool
      The condor_startd daemon runs on each host in a Condor pool.
      It advertises information about the node's resources to the condor_collector daemon running on the Master host, for matching against pending resource requests.
      This daemon is also responsible for enforcing the policies that resource owners require, which determine under what conditions remote jobs will be started, suspended, resumed, vacated or killed.
      When the condor_startd is ready to execute a Condor job on an Execution host, it spawns the condor_starter.
  8. Daemons in a Condor pool
      The condor_starter daemon only runs on Execution hosts.
      It is the condor_starter that actually spawns a remote Condor job on a given host in a Condor pool.
      The condor_starter daemon sets up the execution environment and monitors the job once it is running.
      When a job completes, the condor_starter sends job status information back to the job Submission node and exits.
  9. Daemons in a Condor pool
      The condor_schedd daemon running on each host in a Condor pool deals with resource requests.
      User jobs submitted to a node are stored in a local job queue managed by the condor_schedd daemon.
      Condor command-line tools such as condor_submit, condor_q or condor_rm interact with the condor_schedd daemon to allow users to submit a job into a job queue, and to view and manipulate the job queue.
      If the condor_schedd is down on a given machine, none of these commands will work.
  10. Daemons in a Condor pool
      The condor_shadow daemon only runs on Submission hosts in a Condor pool and acts as the resource manager for user job submission requests.
      The condor_shadow daemon performs remote system calls, allowing jobs submitted to Condor to be checkpointed.
      Any system call performed on a remote Execution host is sent over the network back to the condor_shadow daemon on the Submission host, and the results are also sent back to the Submission host.
  11. Daemons in a Condor pool
      The condor_collector daemon only runs on the Central Manager host.
      This daemon interacts with the condor_startd and condor_schedd daemons running on other hosts to collect all the information about the status of a Condor pool, such as job requests and resources available.
      The condor_status command can be used to query the condor_collector daemon for specific status information about a Condor pool.
  12. Daemons in a Condor pool
      The condor_negotiator daemon only runs on the Central Manager host and is responsible for matching a resource with a specific job request within a Condor pool.
      Periodically, the condor_negotiator daemon starts a negotiation cycle, in which it queries the condor_collector daemon for the current state of all the resources available in the pool.
      It interacts, in priority order, with each condor_schedd daemon running on a Submission host that has resource requests, and tries to match available resources with those requests.
  13. Daemons in a Condor pool
      The condor_kbdd daemon only runs on Execution hosts running Digital Unix or IRIX.
      On these platforms, the condor_startd daemon cannot determine console (keyboard or mouse) activity directly from the operating system.
      The condor_kbdd daemon connects to an X Server and periodically checks whether there is any user activity.
      If so, the condor_kbdd daemon sends a command to the condor_startd daemon running on the same host.
  14. Daemons in a Condor pool
      The condor_ckpt_server daemon runs on a checkpoint server, which is an Execution host, to store and retrieve checkpointed files.
      If a checkpoint server in a Condor pool is down, Condor will revert to sending the checkpointed files for a given job back to the job Submission host.
  15. Job life cycle in Condor
  16. Job life cycle in Condor
     1. Job submission: A job is submitted from a Submission host with the condor_submit command.
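The submission step above is driven by a submit description file read by condor_submit. The sketch below generates a minimal one; the executable and file names are hypothetical examples, while universe, executable, output, error, log and queue are standard condor_submit keywords.

```python
# A minimal sketch of a Condor submit description file, generated as text.
# The executable and file names are hypothetical.
def make_submit_file(executable, universe="vanilla"):
    lines = [
        f"universe   = {universe}",
        f"executable = {executable}",
        "output     = job.out",   # where the job's stdout goes
        "error      = job.err",   # where the job's stderr goes
        "log        = job.log",   # Condor's event log for this job
        "queue",                  # place one copy of the job in the queue
    ]
    return "\n".join(lines)

submit = make_submit_file("my_app")
print(submit)
```

In practice this text would be saved to a file and passed to condor_submit on a Submission host.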
  17. Job life cycle in Condor
     2. Job request advertising: Once it receives a job request, the condor_schedd daemon on the Submission host advertises the request to the condor_collector daemon running on the Central Manager host.
  18. Job life cycle in Condor
     3. Resource advertising: Each condor_startd daemon running on an Execution host advertises the resources available on that host to the condor_collector daemon running on the Central Manager host.
  19. Job life cycle in Condor
     4. Resource matching: The condor_negotiator daemon running on the Central Manager host periodically queries the condor_collector daemon (Step 4) to match a resource for a user job request.
     5. It then informs the condor_schedd daemon running on the Submission host of the matched Execution host.
  20. Job life cycle in Condor
     6. Job execution: The condor_schedd daemon running on the job Submission host interacts with the condor_startd daemon running on the matched Execution host (Step 6),
     7. which then spawns a condor_starter daemon (Step 7).
  21. Job life cycle in Condor
     8. The condor_schedd daemon on the Submission host spawns a condor_shadow daemon
     9. to interact with the condor_starter daemon for job execution.
     10. The condor_starter daemon running on the matched Execution host receives a user job to execute.
  22. Job life cycle in Condor
     11. Return output: When a job completes, the results are sent back to the Submission host through the interaction between the condor_shadow daemon running on the Submission host and the condor_starter daemon running on the matched Execution host.
  23. Security management in Condor
      Condor provides strong support for authentication, encryption, integrity assurance and authorization.
      Most of these security features are enabled by a Condor system administrator through configuration macros.
      When Condor is installed, the default configuration settings enable no authentication, encryption, integrity checks or authorization checks. This allows newer versions of Condor with security features to work or interact with previous versions that lack security support.
      An administrator must modify the configuration settings to enable the security features.
  24. Job management in Condor
     Condor manages jobs in the following aspects.
     Job: A Condor job is a work unit submitted to a Condor pool for execution.
     Job types: Jobs that can be managed by Condor are executable sequential or parallel codes, using, for example, PVM or MPI. A job submission may involve a job that runs over a long period, a job that needs to run many times, or a job that needs many machines to run in parallel.
     Queue: Each Submission host has a job queue maintained by the condor_schedd daemon running on the host. A job in a queue can be removed or placed on hold.
  25. Job management in Condor
     Job status: A job can be in one of the following states:
     • Idle: There is no job activity.
     • Busy: A job is busy running.
     • Suspended: A job is currently suspended.
     • Vacating: A job is currently checkpointing.
     • Killing: A job is currently being killed.
     • Benchmarking: The condor_startd is running benchmarks.
  26. Job management in Condor
     Job run-time environments: The Condor universe specifies a Condor execution environment. There are seven universes in Condor 6.6.3, as described below.
      The Standard Universe is the default (except where the configuration variable DEFAULT_UNIVERSE defines otherwise). It tells Condor that the job has been re-linked via condor_compile with the Condor libraries and therefore supports checkpointing and remote system calls.
      The Vanilla Universe is an execution environment for jobs which have not been linked with the Condor libraries; it is also used to submit shell scripts to Condor.
  27. Job management in Condor
      The PVM Universe is used for a parallel job written with PVM 3.4.
      The Globus Universe is intended to provide the standard Condor interface to users who wish to start Globus jobs from Condor. Each job queued in the job submission file is translated into the Globus Resource Specification Language (RSL) and subsequently submitted to Globus via the Globus GRAM protocol.
      The MPI Universe is used for MPI jobs written with the MPICH package.
      The Java Universe is used for programs written in Java.
      The Scheduler Universe allows a Condor job to be executed on the host where the job is submitted. The job does not need matchmaking for a host and it will never be preempted.
  28. Job management in Condor
     Job submission with a shared file system: If Vanilla, Java or MPI jobs are submitted without using the file transfer mechanism, Condor must use a shared file system to access input and output files. In this case, the job must be able to access the data files from any machine on which it could potentially run.
  29. Job management in Condor
     Job submission without a shared file system
      Condor also works well without a shared file system.
      A user can use the file transfer mechanism in Condor when submitting jobs.
      Condor will transfer any files needed by a job from the host machine where the job is submitted into a temporary working directory on the machine where the job is to be executed.
      Condor executes the job and transfers the output back to the Submission machine.
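The file transfer mechanism is requested through a few extra submit-file keywords. The sketch below generates just those lines; the input file names are hypothetical, while should_transfer_files, when_to_transfer_output and transfer_input_files are standard condor_submit keywords.

```python
# Sketch: the submit-file lines that enable Condor's file transfer
# mechanism, for pools without a shared file system.
# The input file names are hypothetical.
def transfer_settings(input_files):
    return [
        "should_transfer_files   = YES",      # do not assume a shared FS
        "when_to_transfer_output = ON_EXIT",  # copy results back at job exit
        "transfer_input_files    = " + ",".join(input_files),
    ]

lines = transfer_settings(["data.csv", "params.cfg"])
```

These lines would be appended to a submit description file before running condor_submit.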
  30. Job management in Condor
     Job priority: Job priorities allow the assignment of a priority level to each submitted Condor job in order to control the order of execution. The priority of a Condor job can be changed.
     Chirp I/O: The Chirp I/O facility in Condor provides sophisticated I/O functionality. It has two advantages over simple whole-file transfers.
      First, access to input files is performed at run time rather than submission time.
      Second, a part of a file can be transferred instead of the whole file.
  31. Resource management in Condor
     Condor manages resources in a Condor pool in the following aspects.
     Tracking resource usage: The condor_startd daemon on each host reports to the condor_collector daemon on the Central Manager host about the resources available on that host.
     User priority: Condor hosts are allocated to users based upon a user’s priority. A lower numerical value for user priority means higher priority, so a user with priority 5 will get more resources than a user with priority 50.
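The inverse relationship between the numeric priority value and the resource share can be illustrated by allocating shares proportional to 1/priority. This is a simplified model for intuition only, not Condor's actual allocation algorithm, and the user names are hypothetical.

```python
# Simplified model of Condor user priority: a lower numeric priority
# value yields a larger share of the pool's resources.
def fair_shares(priorities):
    # Weight each user by the inverse of their priority value,
    # then normalize so the shares sum to 1.
    weights = {user: 1.0 / p for user, p in priorities.items()}
    total = sum(weights.values())
    return {user: w / total for user, w in weights.items()}

shares = fair_shares({"alice": 5, "bob": 50})
# alice (priority 5) receives a much larger share than bob (priority 50)
```
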
  32. Job scheduling policies in Condor
     Job scheduling in a Condor pool is not strictly based on a first-come-first-served selection policy. Rather, to keep large jobs from draining the pool of resources, Condor uses a unique up-down algorithm that prioritizes jobs inversely to the number of cycles required to run the job. Condor supports the following policies in scheduling jobs.
      First come first served: This is the default scheduling policy.
      Preemptive scheduling: The preemptive policy lets a pending high-priority job take resources away from a running job of lower priority.
      Dedicated scheduling: Dedicated scheduling means that jobs scheduled to dedicated resources cannot be preempted.
  33. Resource matching in Condor
      Resource matching is used to match an Execution host to run a selected job or jobs.
      The condor_collector daemon running on the Central Manager host receives job request advertisements from the condor_schedd daemon running on a Submission host, and resource availability advertisements from the condor_startd daemon running on an Execution host.
      A resource match is performed by the condor_negotiator daemon on the Central Manager host by selecting a resource based on job requirements.
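Conceptually, the negotiator's matchmaking evaluates a job's requirements expression against each machine's advertised attributes (Condor's ClassAd mechanism). The sketch below models this with plain dictionaries and a predicate; the machine names and attribute values are illustrative, not real ClassAds.

```python
# Toy model of ClassAd-style matchmaking: each Execution host advertises
# its attributes; a job's requirements expression selects a match.
machines = [
    {"Name": "node1", "Memory": 2048, "Arch": "INTEL", "OpSys": "LINUX"},
    {"Name": "node2", "Memory": 512,  "Arch": "INTEL", "OpSys": "LINUX"},
]

# A job requirement: at least 1 GB of memory on a Linux host.
def job_requirements(machine):
    return machine["Memory"] >= 1024 and machine["OpSys"] == "LINUX"

# The "negotiator" picks the first advertised resource satisfying the job.
match = next((m for m in machines if job_requirements(m)), None)
```

Real ClassAds are two-sided (machines also express requirements on jobs); this sketch shows only the job-side half of the match.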
  34. Condor support in Globus
      Jobs can be submitted directly to a Condor pool from a Condor host, or via Globus (GT2 or earlier versions of Globus).
      The Globus host is configured with the Condor jobmanager provided by Globus.
      When using a Condor jobmanager, jobs are submitted to the Globus resource, e.g. using globus_job_run.
      However, instead of forking the jobs on the local machine, jobs are re-submitted by Globus to Condor using the condor_submit tool.
  35. Submitting jobs to a Condor pool via Condor or Globus
  36. Submitting jobs to Globus via Condor-G
  37. Sun Grid Engine
     SGE is a distributed resource management and scheduling system from Sun Microsystems that can be used to optimize the utilization of software and hardware resources in a UNIX-based computing environment. SGE can be used to find a pool of idle resources and harness those resources; it can also be used for normal activities, such as managing and scheduling jobs onto the available resources. The latest version of SGE is Sun N1 Grid Engine (N1GE) version 6.
  38. Sun Grid Engine: The architecture of the SGE
  39. Sun Grid Engine: The architecture of the SGE
     Master host: A single host is selected to be the SGE master host. This host handles all requests from users, makes job-scheduling decisions and dispatches jobs to execution hosts.
  40. Sun Grid Engine: The architecture of the SGE
     Submit host: Submit hosts are machines configured to submit, monitor and administer jobs, and to manage the entire cluster.
     Execution host: Execution hosts have permission to run SGE jobs.
  41. Sun Grid Engine: The architecture of the SGE
     Administration host: SGE administrators use administration hosts to make changes to the cluster’s configuration, such as changing distributed resource management parameters, configuring new nodes or adding or changing users.
  42. Sun Grid Engine: The architecture of the SGE
     Shadow master host: While there is only one master host, other machines in the cluster can be designated as shadow master hosts to provide greater availability. A shadow master host continually monitors the master host, and automatically and transparently assumes control in the event that the master host fails. Jobs already in the cluster are not affected by a master host failure.
  43. Daemons in an SGE cluster
     sge_qmaster – The Master daemon: The sge_qmaster daemon is the centre of the cluster’s management and scheduling activities; it maintains tables about hosts, queues, jobs, system load and user permissions. It receives scheduling decisions from the sge_schedd daemon and requests actions from the sge_execd daemon on the appropriate execution host(s). The sge_qmaster daemon runs on the Master host.
  44. Daemons in an SGE cluster
     sge_schedd – The Scheduler daemon: The sge_schedd is a scheduling daemon that maintains an up-to-date view of the cluster’s status with the help of the sge_qmaster daemon. It makes the scheduling decisions about which job(s) are dispatched to which queue(s). It then forwards these decisions to the sge_qmaster daemon, which initiates the requisite actions. The sge_schedd daemon also runs on the Master host.
  45. Daemons in an SGE cluster
     sge_execd – The Execution daemon: The sge_execd daemon is responsible for the queue(s) on its host and for the execution of jobs in these queues by starting sge_shepherd daemons. Periodically, it forwards information, such as job status or the load on its host, to the sge_qmaster daemon. The sge_execd daemon runs on each Execution host.
  46. Daemons in an SGE cluster
     sge_commd – The Communication daemon: The sge_commd daemon communicates over a well-known TCP port and is used for all communication among SGE components. The sge_commd daemon runs on each Execution host and the Master host in an SGE cluster.
  47. Daemons in an SGE cluster
     sge_shepherd – The Job Control daemon: Started by the sge_execd daemon, one sge_shepherd daemon runs for each job actually being executed on a host. The sge_shepherd daemon controls the job’s process hierarchy and collects accounting data after the job has completed.
  48. Job management in SGE
     SGE supports four job types – batch, interactive, parallel and array. The first three have obvious meanings; the fourth type, the array job, is one in which a single job can be replicated a specified number of times, each instance differing only in its input data set, which is useful for parameter studies.
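An array job can be pictured as one job description expanded into many tasks, each distinguished only by its task index, which SGE exposes to the task through the SGE_TASK_ID environment variable. The sketch below models that expansion; the command name is hypothetical.

```python
# Toy model of an SGE array job: one command is replicated over a range
# of task indices; each task would see its own SGE_TASK_ID value and
# use it to pick its input data set.
def expand_array_job(command, first, last, step=1):
    return [
        {"SGE_TASK_ID": str(task_id), "command": command}
        for task_id in range(first, last + 1, step)
    ]

# Equivalent in spirit to submitting one job with a task range of 1-10.
tasks = expand_array_job("./process_input", 1, 10)
```
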
  49. Job run-time environments in SGE
     SGE supports three execution modes – batch, interactive and parallel.
      Batch mode is used to run straightforward sequential programs.
      In interactive mode, users are given shell access (command line) to some suitable host via, for example, X-windows.
      In parallel mode, parallel programs using the likes of MPI and PVM are supported.
  50. Job selection and resource matching in SGE
     Jobs submitted to the Master host in an SGE cluster are held in a spooling area until the scheduler determines that the job is ready to run. SGE matches the available resources to a job’s requirements; for example, matching the available memory, CPU speed and available software licences, values which are periodically collected from the Execution hosts. The requirements of the jobs may be very different, and only certain hosts may be able to provide the corresponding services. Once a resource becomes available for the execution of a new job, SGE dispatches the job with the highest priority and matching requirements.
  51. Job scheduling in SGE
     Fundamentally, SGE uses two sets of criteria to schedule jobs – job priorities and equal share.
     Job priorities: This criterion concerns the order in which different jobs are scheduled; a first-in-first-out (FIFO) rule is applied by default.
     Equal-share scheduling: The FIFO rule sometimes leads to problems, especially when users tend to submit a series of jobs at almost the same time. All the jobs submitted in this case will be designated to the same group of queues and may have to wait a very long time before executing. Equal-share scheduling avoids this problem by sorting the jobs of a user who already owns an executing job to the end of the precedence list.
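The equal-share rule described above can be sketched as a stable re-sort of the FIFO queue that pushes jobs of users who already own a running job to the end of the precedence list. This is a simplified model for intuition, not SGE's actual implementation, and the job IDs and user names are hypothetical.

```python
# Simplified model of SGE's equal-share scheduling: jobs from users who
# already have a running job are moved to the end of the precedence
# list; within each group, FIFO order is preserved (sorted() is stable).
def equal_share_order(queue, running_users):
    return sorted(queue, key=lambda job: job["user"] in running_users)

queue = [
    {"id": 1, "user": "alice"},  # alice already has a job running
    {"id": 2, "user": "alice"},
    {"id": 3, "user": "bob"},    # bob has nothing running yet
]
order = equal_share_order(queue, running_users={"alice"})
# bob's job jumps ahead of alice's backlog
```
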
  52. Submitting jobs to an N1GE cluster via N1GE or Globus
  53. The Portable Batch System (PBS)
      PBS is a resource management and scheduling system.
      It accepts batch jobs (shell scripts with control attributes), preserves and protects a job until it runs, executes the job, and delivers the output back to the submitter.
      A batch job is a program that executes on the back-end computing environment without further user interaction.
      PBS may be installed and configured to support jobs executing on a single computer or on a cluster-based computing environment.
      PBS is capable of allowing its nodes to be grouped into many configurations.
  54. The Portable Batch System (PBS)
     The PBS architecture: PBS uses a Master host and an arbitrary number of Execution and job Submission hosts. The Master host is the central manager of a PBS cluster; a host can be configured as both a Master host and an Execution host.
  55. The Portable Batch System (PBS)
     Daemons in a PBS cluster
     pbs_server: The pbs_server daemon only runs on the PBS Master host (server). Its main function is to provide the basic batch services, such as receiving/creating a batch job, modifying a job, protecting the job against system crashes and executing the job.
  56. The Portable Batch System (PBS)
     pbs_mom: The pbs_mom daemon runs on each host and is used to start, monitor and terminate jobs, under instruction from the pbs_server daemon.
  57. The Portable Batch System (PBS)
     pbs_sched: The pbs_sched daemon runs on the Master host and determines when and where to run jobs. It requests job state information from the pbs_server daemon and resource state information from the pbs_mom daemons, and then makes job-scheduling decisions.
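A batch job handed to pbs_server is typically a shell script whose #PBS comment lines carry the control attributes mentioned above. The sketch below generates such a script as text; the job name, resource values and command are hypothetical, while -N (job name) and -l (resource list, e.g. nodes and walltime) are standard PBS directives.

```python
# Sketch: generating a PBS batch job script. The #PBS comment lines are
# directives read by the server when the script is queued (e.g. via qsub).
# Job name, resource values and the command itself are hypothetical.
def pbs_script(name, nodes, walltime, command):
    return "\n".join([
        "#!/bin/sh",
        f"#PBS -N {name}",              # job name
        f"#PBS -l nodes={nodes}",       # number of nodes requested
        f"#PBS -l walltime={walltime}", # maximum wall-clock time
        command,                        # the actual work to run
    ])

script = pbs_script("demo", 2, "01:00:00", "./my_app")
print(script)
```
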
  58. Job selection in PBS
     Jobs submitted to PBS are put in job queues. Jobs can be sequential or parallel codes using MPI. A server can manage one or more queues; a batch queue consists of a collection of zero or more jobs and a set of queue attributes. Jobs reside in the queue or are members of the queue. In spite of the name, jobs residing in a queue need not be ordered in FIFO fashion.
  59. Resource matching in PBS
     In PBS, resources can be identified either explicitly through a job control language, or implicitly by submitting the job to a particular queue that is associated with a set of resources. Once a suitable resource is identified, a job can be dispatched for execution. PBS clients have to identify a specific queue to submit to in advance, which then fixes the set of resources that may be used; this hinders further dynamic and qualitative resource discovery.
  60. Submitting jobs to or from a PBS cluster via Globus
  61. Additional PBS Pro services include:
      Cycle harvesting: PBS Pro can run jobs on idle workstations and suspend or re-queue the jobs when the workstation becomes used, based on either load average or keyboard/mouse input.
      Site-defined resources: A site can define one or more resources which can be requested by jobs. If a resource is “consumable”, it can be tracked at the server, queue and/or node level.
  62. Additional PBS Pro services include:
      “Peer to Peer” scheduling: A site can have multiple PBS Pro clusters (each cluster has its own server, scheduler and one or more execution systems). A scheduler in any given cluster can be configured to move jobs from other clusters to its own cluster when the resources required by a job are available locally.
      Advance reservations: Resources, such as nodes or CPUs, can be reserved in advance with a specified start and end time/date. Jobs can be submitted against the reservation and run in the time period specified. This ensures the required computational resources are available when time-critical work must be performed.
