
Cloud Submit

From Engineering Grid Wiki

Submitting basic jobs to the Cloud is easy. You'll first want to SSH to 'cloud.seas.wustl.edu' with your SEAS username/password.

You'll need to write a shell script, of any name, that you'll then submit to the Cloud to be run. A simple example is below:



#!/bin/bash
#$ -cwd
/research-projects/eit/mark/my_program_name --flag1 --flag2

As a more concrete example:


#!/bin/bash
#$ -cwd
/cluster/cloud/matlab/bin/matlab -nodisplay -nojvm < my_matlabCode.m
#/cluster/cloud/bin/comsol43a batch ....

You would submit this job from your working directory (it's very important to be in the directory where your data and outputs will reside when you submit your job!) like so:

qsub job.sh
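Once submitted, Grid Engine assigns the job a numeric ID that you can use to monitor it. A typical session might look like the following (the job ID shown is illustrative):

```shell
# Submit the script from the directory holding your data.
qsub job.sh
# Grid Engine replies with a line such as:
#   Your job 12345 ("job.sh") has been submitted

# Check the status of your jobs (qw = waiting, r = running).
qstat -u $USER

# When the job finishes, its output and error streams appear in the
# submission directory, e.g. as job.sh.o12345 and job.sh.e12345.
```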

Your job can be anything - a program you've compiled or an application like Matlab. The most important requirement of a basic grid job is that the program can run unattended: it won't pop up windows or ask you questions, and it will run to completion without interaction. The vast majority of research applications can be run this way.

The line:

#$ -cwd

is a special comment parsed by qsub. Lines beginning with "#$" let you predefine your qsub flags inside the script, so you don't have to retype them every time you submit. -cwd means execute this job in the current working directory - the directory you were in when you ran qsub. It's very important to be in the directory where your data and outputs will reside when you submit your job!

Here are a few other common flags, and one that's special to the Cloud. More details and more flags can be found by running 'man qsub' on cloud.seas.wustl.edu. If you use these flags in your job script, prefix them with "#$" so Grid Engine parses them correctly.

-N text

Defines the name of the job.

-e filename 

Defines the path of the error stream of the job. This defaults to jobname.ejobid.

-o filename

Defines the path of the output stream of the job. This defaults to jobname.ojobid.

-j y[es]|n[o]

Merge the error stream into the output stream. When merging is enabled, any -e path is ignored.

-r y[es]|n[o]

Rerun the job if it seems to have crashed.

-v variable[=value],...

Define an environment variable to be exported to the job.

-S filename

Define the interpreting shell for the job.
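Putting several of these flags together, a complete job script might look like the following sketch (the job name, variable, and paths are illustrative, reusing the example program above):

```shell
#!/bin/bash
#$ -cwd                 # run in the submission directory
#$ -N my_analysis       # job name (used in output filenames)
#$ -j y                 # merge the error stream into the output stream
#$ -o my_analysis.log   # write the merged output here
#$ -S /bin/bash         # interpret this script with bash
#$ -v DATA_DIR=/research-projects/eit/mark/data   # export a variable to the job

/research-projects/eit/mark/my_program_name --input "$DATA_DIR"
```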


These two flags control how resource requests are treated. -soft means the requested resources would be nice to have, but are not necessary; -hard means the resources are required. You can intersperse these flags among resource request flags - each request is governed by whichever of -soft or -hard most recently preceded it.
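For example, to require a certain amount of free memory but only prefer local scratch space, you might intersperse the flags on the command line like this (the sizes are illustrative):

```shell
# -hard makes the mem_free request mandatory;
# -soft makes the tmp_free request a preference only.
qsub -hard -l mem_free=8G -soft -l tmp_free=20G job.sh
```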

-l resource=value

This flag requests resources. Some common options are:

-l mem_free=XXG
-l mem_total=XXG

The two flags above request certain amounts of memory. It's usually best to request mem_free, which matches your job to a node with that much memory actually free, rather than mem_total, which only checks the node's installed memory regardless of what other jobs are already using.

-l tmp_free=XXG

This flag, unique to the Cloud, specifies the amount of local disk space on a node your job might use. It is your responsibility to move any data to and from this space, either as part of the job script or by hand. Data in this space is deleted after 30 days. The path your jobs should use to access this space is /cluster-tmp - this path is local to each node and is not shared, and not backed up. We recommend that any job requiring heavy disk access make use of the local temp space.
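Since /cluster-tmp is local to each node, your script has to stage data in and out itself. A minimal sketch of such a job (the paths and program are illustrative; $JOB_ID is set by Grid Engine):

```shell
#!/bin/bash
#$ -cwd
#$ -l tmp_free=20G     # ask for 20 GB of local scratch space

# Stage input data onto the node's local disk.
SCRATCH=/cluster-tmp/$USER/$JOB_ID
mkdir -p "$SCRATCH"
cp big_input.dat "$SCRATCH/"

# Run against the local copy for fast disk access.
/research-projects/eit/mark/my_program_name \
    --input "$SCRATCH/big_input.dat" --output "$SCRATCH/results.dat"

# Stage results back to the shared working directory and clean up.
cp "$SCRATCH/results.dat" .
rm -rf "$SCRATCH"
```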

-pe smp4 4

This flag requests a 4-CPU SMP machine; smp8 8 and smp2 2 also work. Since every machine has at least two CPUs, smp2 jobs have a better chance of getting a pair of CPUs than smp4 or smp8 jobs, and smp4 jobs will generally start before smp8 jobs. MATLAB will automatically use extra CPUs in large matrix operations, so ask for a machine that has them available.
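For example, a MATLAB job that benefits from multithreaded matrix operations might request a 4-CPU machine, reusing the MATLAB invocation from earlier (the script name is illustrative):

```shell
#!/bin/bash
#$ -cwd
#$ -pe smp4 4          # request a machine with 4 CPUs available
/cluster/cloud/matlab/bin/matlab -nodisplay -nojvm < my_matlabCode.m
```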


qdel <job-number> will kill/delete a job

This page was last modified on 8 March 2013, at 14:15.