Zorro, the AU High Performance Computing (HPC) system, is managed by the HPC Committee, which developed the policies on this page. The user policies below aim to
- help make the system effective for users
- ensure proper maintenance of the system
- document use of this federally-funded facility
- enable researchers to contribute to the system
This webpage always shows the policies currently in effect. Policies are continually reviewed and updated by the HPC Committee. Questions or concerns should be e-mailed to email@example.com.
It is essential that all users have access to the HPC system and that the integrity of all users' code is maintained. Following some of these policies is likely to require an understanding of the nature of HPC computing and of the configuration of the hardware and software on the system. Please contact firstname.lastname@example.org to ask questions or to report problems.
If a user does not abide by the policies, a representative of the HPC Committee has the right to terminate the user's jobs and/or to suspend the user's account.
Use the batch submission system of the scheduler.
- Users may not run applications interactively from the login node.
- Users may not log into the compute nodes to run a job directly without permission from the HPC Committee.
To request access other than batch submission, please send a request to email@example.com explaining why special access is necessary for your research.
- Unauthorized interactive jobs or applications found running on the login node(s), or on the compute nodes outside of LSF, are subject to immediate termination.
Respond in a timely way to correspondence from the HPC Committee.
All correspondence will be sent to the e-mail address on file with the HPC committee.
Costs of Use
There is no up-front cost for use of the American University HPC System. As of March 16, 2022, grants accounting has determined that use of Zorro is included in overhead costs.
Who May Have An Account?
- Any AU faculty PI whose project has computational needs beyond the capacity of his or her current workstation may apply for an account. The account can be accessed by all members of the project's research team. (See Faculty Account Form.)
- Students (undergrad or grad) who are sponsored by an AU faculty PI to do independent research may apply for an account. (See Student Account Form.)
- Users from outside the AU community may also apply for an account for academic research, especially for research that includes an AU co-PI. The HPC Liaison Committee Chair will be happy to assist in locating an AU co-PI. Please send inquiries to firstname.lastname@example.org.
The chair of the HPC Liaison Committee reviews and approves requests for accounts continually. The chair consults the committee regarding special requests. User account status is reviewed annually by the Committee.
User access to compute nodes is managed by software called a scheduler. The scheduler reserves available nodes and other resources according to the queue a user can access. The queues are managed so that AU's research capacity is maximized.
There are four queues on the cluster. Three are open to all users; one is restricted. Each queue has a different runtime limit and different resources (CPUs) associated with it. As a rule, compute-intensive jobs (such as computing certain statistical estimators) are allowed to use more resources over a short period of time. Jobs that require long run times (such as simulations or real-time modeling) are allowed fewer resources so that these jobs do not create bottlenecks for other users.
Queues Open to All Users
Normal: This is the default queue. If you do not specify another queue when you submit a job, your job will run in the normal queue. The normal queue has a runtime limit of 48 hours. Users can request up to 36 CPUs / job slots when they submit a job.
Long: The long queue is for jobs that are expected to run longer than 48 hours. The runtime limit in this queue is 240 hours. Users can request up to 24 CPUs / job slots. If a user finds that jobs submitted to the normal queue are terminated before completion, he or she should resubmit the job to the long queue.
Short: This queue is for compute-intensive jobs that are not expected to run longer than two hours. The runtime limit in this queue is 2 hours. Users can request up to 60 CPUs / job slots. Again, users must specify the short queue when the job is submitted.
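As an illustration, jobs are typically submitted to a specific queue through LSF's `bsub` command. A minimal sketch of a job script follows; the queue names and slot limits come from this page, while the job name, program, and file names are hypothetical, and the exact options available may depend on how LSF is configured on Zorro.

```shell
#!/bin/bash
# Example LSF job script (hypothetical program and file names).
# Submit from the login node with:  bsub < myjob.lsf
#BSUB -J my_analysis          # job name
#BSUB -q long                 # queue: normal (default), long, or short
#BSUB -n 24                   # job slots (the long queue allows up to 24)
#BSUB -o my_analysis.%J.out   # standard output file (%J expands to the job ID)
#BSUB -e my_analysis.%J.err   # standard error file

./my_program input.dat
```

Omitting the `-q` option sends the job to the normal queue; a job expected to exceed that queue's 48-hour limit should request the long queue as above.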
Special Purpose & Restricted Queues
Priority: The priority queue is reserved for users who provide financial support for the AU HPC System, including its initial funding. Jobs submitted by priority users go to the front of the queue they requested.
Note: System administration and testing of the machine may interrupt jobs. System administrators will notify Users if maintenance or testing is expected to impact job run-time.
Each user has an obligation to help to sustain the AU HPC System. While financial support will help grow the system, the most important way each user sustains the system is by demonstrating that it contributes to ongoing research.
- Acknowledge the use of the HPC system in papers and presentations. A recommended acknowledgement is "Computing resources used for this work provided by the American University High Performance Computing System, which is funded in part by the National Science Foundation (BCS-1039497); see www.american.edu/cas/hpc for information on the system and its uses."
- Confirm that user accounts are still needed when requested.
- Update Titles and Abstracts for each research project being conducted on the HPC system when requested. (Requests are planned to coincide with FARS due dates.)
- Provide information on presentations, working papers, and publications based on work using the HPC system. We are happy to post PDF files of papers or presentations on the facility's webpage or point a link to another webpage. (Requests for updates are planned to coincide with FARS due dates.)
- Be willing to participate as co-PI or co-investigator in future grant proposals in support of the HPC system (faculty users only).
The newly reorganized HPC Committee manages the AU HPC System. More information will be posted soon.
- Contact: email@example.com
- Mike Alonzo
Professor, Environmental Science
- Raychelle Burks
- Benjamin Djain
- Ignacio Gonzalez Garcia
Assistant Professor, Economics
- Jessica Uscinski
Senior Professorial Lecturer, Department of Physics