Zorro, the AU High Performance Computing (HPC) System, is managed by the HPC Liaison Committee; the policies on this page were developed by the committee. The user policies below aim to
- help make the system effective for users
- ensure proper maintenance of the system
- document use of this federally-funded facility
- enable researchers to contribute to the system
This webpage always shows the policies currently in effect. Policies are continually reviewed and updated by the HPC Committee. Questions or concerns should be e-mailed to firstname.lastname@example.org.
It is essential that all users have access to the HPC system and that the integrity of all users' code is maintained. Following some of these policies is likely to require that you understand the nature of HPC computing and the configuration of the hardware and software on the system. Please contact email@example.com to ask questions or to report problems.
If a user does not abide by the policies, a representative of the HPC Liaison Committee has the right to terminate the user's jobs and/or to suspend the user's account.
Use the batch submission system of the scheduler.
- Users may not run applications interactively from the login node.
- Users may not log into the compute nodes to run a job directly without permission from the HPC Committee.
To request access other than batch submission, please send a request explaining why special access is necessary for your research to firstname.lastname@example.org.
- Unauthorized interactive jobs or applications found running on the login node(s), or on the compute nodes outside of LSF, are subject to immediate termination.
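To illustrate the batch-submission requirement, here is a minimal sketch of an LSF job script submitted with `bsub`; the job name, output file, and application name are hypothetical placeholders, and the resource requests should be adjusted to your job.

```shell
#!/bin/bash
# Hypothetical LSF batch script -- submit with:  bsub < myjob.lsf
# The application is never run directly on a login or compute node.
#BSUB -J myjob          # job name (placeholder)
#BSUB -o myjob.%J.out   # write output to a file; %J expands to the job ID
#BSUB -n 4              # request 4 CPUs / job slots

./my_application        # hypothetical application; LSF runs it on compute nodes
```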
Respond in a timely way to correspondence from the HPC Committee.
All correspondence will be sent to the e-mail address on file with the HPC committee.
Costs of Use
There is no up-front cost for use of the American University HPC System. AU faculty PIs and students are encouraged to use the system to obtain preliminary results and to leverage their preliminary results into external funding (please see User Obligations tab).
Externally funded users are encouraged to contribute funding to the facility. The support requested should reflect the resource use expected for the project. A typical project uses resources equivalent to one node of computing power per year; one node of computing power currently costs $1,000 per year. PIs may request an estimate of costs of use by emailing email@example.com. Jobs submitted by users (and their team members) who provide financial support are placed in the priority queue (please see Priority tab).
Who May Have An Account?
- Any AU faculty PI whose project has computational needs beyond the capacity of his or her current workstation may apply for an account. The account can be accessed by all members of the research team on the project. (See Faculty Account Form.)
- Students (undergrad or grad) who are sponsored by an AU faculty PI to do independent research may apply for an account. (See Student Account Form.)
- Users from outside the AU community may also apply for an account for academic research, especially for research that includes an AU co-PI. The HPC Liaison Committee Chair will be happy to assist in locating an AU co-PI. Please send inquiries to firstname.lastname@example.org.
The chair of the HPC Liaison Committee reviews and approves requests for accounts continually. The chair consults the committee regarding special requests. User account status is reviewed annually by the Committee.
User access to compute nodes is managed by software called a scheduler. The scheduler reserves available nodes and other resources according to the queue that a user can access. The queues are managed so that AU's research capacity is maximized.
There are four queues on the cluster. Three are open to all users; one is restricted. Each queue has a different runtime limit and different resources (CPUs) associated with it. As a rule, compute-intensive jobs (such as computing certain statistical estimators) are allowed to use more resources over a short period of time. Jobs that require long run times (such as simulations or real-time modeling) are allowed fewer resources so that these jobs do not create bottlenecks for other users.
Queues Open to All Users
Normal: This is the default queue. If you do not specify another queue when you submit a job, your job will run in the normal queue. The normal queue has a runtime limit of 48 hours. Users can request up to 36 CPUs / job slots when they submit a job.
Long: The long queue is for jobs that are expected to run longer than 48 hours. The runtime limit in this queue is 240 hours. Users can request up to 24 CPUs / job slots. If a user finds that jobs submitted to the normal queue are terminated before completion, he or she should resubmit the job to the long queue.
Short: This queue is for compute-intensive jobs that are not expected to run longer than two hours. The runtime limit in this queue is 2 hours. Users can request up to 60 CPUs / job slots. Again, users specify the short queue when the job is submitted.
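As a sketch of how queue selection works at submission time, LSF's `bsub` command takes the queue name via its `-q` option and the number of job slots via `-n`; the script names below are placeholders.

```shell
# Hypothetical submissions illustrating queue selection with bsub.
# Script names are placeholders for your own job scripts.
bsub -q short -n 60 ./fast_estimator.sh    # compute-intensive, under 2 hours
bsub -n 36 ./standard_job.sh               # no -q flag: default (normal) queue, 48-hour limit
bsub -q long -n 24 ./long_simulation.sh    # expected to run longer than 48 hours, up to 240
```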
Special Purpose & Restricted Queues
Priority: The priority queue is reserved for users who provide financial support for the AU HPC System, including its initial funding. Jobs submitted by priority users go to the front of the queue they requested.
Note: System administration and testing of the machine may interrupt jobs. System administrators will notify Users if maintenance or testing is expected to impact job run-time.
Each user has an obligation to help to sustain the AU HPC System. While financial support will help grow the system, the most important way each user sustains the system is by demonstrating that it contributes to ongoing research.
- Acknowledge the use of the HPC system in papers and presentations. A recommended acknowledgement is "Computing resources used for this work provided by the American University High Performance Computing System, which is funded in part by the National Science Foundation (BCS-1039497); see www.american.edu/cas/hpc for information on the system and its uses."
- Confirm that user accounts are still needed when requested.
- Update Titles and Abstracts for each research project being conducted on the HPC system when requested. (Requests are planned to coincide with FARS due dates.)
- Provide information on presentations, working papers, and publications based on work using the HPC system. We are happy to post PDF files of papers or presentations on the facility's webpage or point a link to another webpage. (Requests for updates are planned to coincide with FARS due dates.)
- Be willing to participate as co-PI or co-investigator in future grant proposals in support of the HPC system (faculty users only).
- Include budget requests for computational resources in individual grant proposals. The support requested should reflect the resource use expected for the project; use the cost per node cited above as a guide. HPC Liaison Committee members and users will cheerfully assist with proposals in whatever way will be most helpful (including but not limited to: drafting text, acting as co-PI/co-investigator, supplying a letter of support). E-mail email@example.com early in your proposal process. (PIs with current funding will find instructions for making the internal transfer of Zorro support on the MYAU Zorro Sharepoint site.)
The HPC Liaison Committee manages the AU HPC System. New members will rotate onto the committee for two-year terms.
Contact the Committee at firstname.lastname@example.org.
Duties of the HPC Liaison Committee include
- advising OIT and CTRL staff on decisions regarding set-up, maintenance, and operations of HPC resources
- coordinating with OIT to monitor utilization of processing and storage capacity
- informing faculty (especially new faculty) and students about availability of HPC resources
- reviewing requests for storage of data files over the typical allocation
- reviewing requests to install new software; communicating concerns about system operations from researchers using the HPC resources to OIT and CTRL staff
- tracking outcomes of research projects completed using the HPC system
- reviewing requests for external access
- coordinating training of users in parallel-capable code and applications
- organizing seminars to share research results, to encourage use of the system, and to foster interdisciplinary research