Advanced SGE: Task Arrays

Why would I want to use task arrays?

Situations often arise in which you want to run many almost-identical jobs simultaneously, perhaps running the same program many times while changing the input data or some argument or parameter. One possible solution is to write a Python or Perl script to create all the qsub files and then a BASH script to submit them. This is time-consuming and may end up submitting many more jobs to the queue than you really need. This is exactly the kind of problem an SGE task array is designed for. Compared with many separate submissions:

  • only one qsub command is issued (and only one qdel command would be required to delete all jobs)
  • only one entry appears in qstat
  • the load on the SGE submit node is much less than that of submitting many separate jobs
  • it is much easier for the user (you) to keep track of your related jobs

So how can I use them?

The easiest way to think of a task array is as a job script with a built-in FOR loop. It makes use of an environment variable created by Sun Grid Engine: $SGE_TASK_ID. Consider this simple job submission script:
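The original script is not reproduced here; based on the description that follows, it might look something like this sketch (the filename taskarray.sh and the program name myprogram are placeholders, not part of the original):

```shell
#!/bin/bash
# Sketch of a simple task-array submission script ("taskarray.sh" and
# "myprogram" are assumed names, not from the original document).
#$ -cwd          # run each task from the current working directory
#$ -t 1-150      # create tasks numbered 1 to 150

# Each task reads data.$SGE_TASK_ID and writes results.$SGE_TASK_ID,
# e.g. task 7 reads data.7 and writes results.7.
./myprogram < data.$SGE_TASK_ID > results.$SGE_TASK_ID
```

A single `qsub taskarray.sh` then submits the whole array.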

It would be executed through the queues as normal using qsub.

As far as SGE is concerned, this is equivalent to 150 individual queue submissions in which $SGE_TASK_ID takes all the
values between 1 and 150 (inclusive), and where input and output files are indexed by the same ID.

In this example, the input files would take the form data.1, data.2, data.3, etc., and the program would create output files of the form results.1, results.2, results.3, etc. The example expects all input files to be in, and writes all output files to, the same directory: the current working directory, set by the -cwd option.

As each task executes, the variable $SGE_TASK_ID holds one of the values specified by the #$ -t 1-150 directive; in this case 1, 2, 3, …, 150.

A small adjustment to the script would allow each job to run from a separate directory:
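The adjusted script is not shown in the original; a minimal sketch of one way to do it is below (the directory naming scheme dir.N and the program name myprogram are assumptions):

```shell
#!/bin/bash
# Per-directory variant (a sketch): each task changes into its own
# pre-existing subdirectory before running. "dir.N" and "myprogram"
# are assumed names, not from the original document.
#$ -cwd
#$ -t 1-150

# Task N works inside dir.N, keeping its input and output files
# separate from those of every other task.
cd dir.$SGE_TASK_ID
../myprogram < data.$SGE_TASK_ID > results.$SGE_TASK_ID
```

The directories (dir.1, dir.2, …) would need to exist before the array is submitted.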

The SGE ‘loop counter’

These examples use $SGE_TASK_ID starting at 1 and incrementing by 1 to some upper bound. It is a little more flexible than this, however.

A directive of the form #$ -t 200-445:5 would allow $SGE_TASK_ID to take values from 200 (lower bound) to 445 (upper bound), incrementing by 5.

Further examples

Random filenames

The previous examples require all input files to be neatly indexed by number, but often this is not the case. If you have a list of randomly named files, then it is still possible to use $SGE_TASK_ID . Assume that you have a text file files.txt listing all of your input filenames (one per line) and that you know how many files you have (50 in the example below):
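The script for this example is missing from the text; a common way to implement it is to use sed to pull out line number $SGE_TASK_ID from files.txt (a sketch; myprogram and the .out output suffix are assumptions):

```shell
#!/bin/bash
# Sketch: process the N-th file listed in files.txt in task N.
# "myprogram" and the ".out" suffix are assumed names.
#$ -cwd
#$ -t 1-50       # one task per line of files.txt (50 files assumed)

# sed -n "Np" prints only line N of files.txt, i.e. the filename
# belonging to this task.
infile=$(sed -n "${SGE_TASK_ID}p" files.txt)

./myprogram < "$infile" > "$infile.out"
```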

As the ‘loop’ in the script executes, it reads the filenames one by one from files.txt, assigns each to the variable $infile and passes it to your program.

Restricting the maximum number of concurrent tasks

In some situations it may be necessary to restrict the maximum number of tasks running at any one time. This is a useful way to ensure that your jobs do not stop other users' jobs from running.

To do this, add a -tc clause to the submission script:
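The clause itself is not shown in the text; for the 5000-task example described below it would look like this:

```shell
#$ -t 1-5000     # 5000 tasks in total
#$ -tc 50        # but no more than 50 running at any one time
```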

This would run 5000 tasks in total but allow only 50 to run at any one time. **On the MARC1 cluster, we ask you to limit your use of the machine to a maximum of 100 cores at any one time** (i.e. 100 single-core jobs, 50 two-core jobs, etc.)

Using the SGE environment variables

The scheduler sets a number of environment variables during array jobs: $SGE_TASK_FIRST, $SGE_TASK_LAST and $SGE_TASK_STEPSIZE (in addition to $SGE_TASK_ID itself). If you need your script to make things happen at the start or end of an array job, then this script shows how this can be done:
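The script itself is missing from the text; a sketch of the usual pattern is below (the setup and tidy-up actions, and the name myprogram, are placeholders):

```shell
#!/bin/bash
# Sketch: run one-off actions in the first and last tasks of the array.
# The echo lines and "myprogram" are placeholders, not from the original.
#$ -cwd
#$ -t 1-100

if [ "$SGE_TASK_ID" -eq "$SGE_TASK_FIRST" ]; then
    echo "first task: do any one-off setup here"
fi

./myprogram < data.$SGE_TASK_ID > results.$SGE_TASK_ID

if [ "$SGE_TASK_ID" -eq "$SGE_TASK_LAST" ]; then
    echo "last task: do any tidy-up here"
fi
```

Note that tasks do not necessarily run or finish in numerical order, so the $SGE_TASK_LAST branch runs in the task with the highest ID, which is not guaranteed to be the last task to complete.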