by Harry Mangalam <harry.mangalam@uci.edu> v1.00, July 15th, 2013

1. Rationale

Occasionally, UCI research groups will request that External Collaborators (ECs) from other institutions have access to the HPC Linux cluster. This can range from storage of common data to shared analysis of that data for a collaborative research project.

Since HPC is a UCI resource, funded and supported mainly by UCI for the benefit of UCI researchers, we believe the following is an equitable way of allowing such collaborations without depriving other UCI researchers of their resources.

2. Conditions for Use

The following points describe the conditions for ECs to use the HPC Linux cluster.

  • Since HPC is a mixed condo cluster (composed of both institutional and user-contributed resources), ECs will only be able to store data and perform computations on their UCI sponsor's hardware (or share in the sponsor's allocated storage on common filesystems). ECs will not be able to submit jobs to the system Free Queues. This means that for significant storage, the UCI group should purchase or rent storage on HPC at least equal to the amount the ECs want to store.

  • ECs can purchase hardware to install on HPC in support of this collaboration, with the same support structure that UCI researchers have. The ECs retain ownership of the hardware and may extract it when they wish (extraction will take longer for distributed storage nodes). Again, this does not allow them to run jobs on the Free Queues, nor to store data on common filesystems unless they have contributed to the expansion of those filesystems by an equivalent amount. Hardware contributed to HPC will have to meet our basic HPC requirements; please consult with us (Joseph Farran <jfarran@uci.edu> or Harry Mangalam <harry.mangalam@uci.edu>) before purchasing compute or storage hardware for such a project.

  • ECs will have access to all Open Source or freely available software and other resources on HPC, but not to proprietary software (e.g. Mathematica, MATLAB, SPSS) unless licenses are paid for by the UCI collaborators or by the ECs themselves.

  • System administration will be available to ECs only on an as-time-can-be-spared basis. UCI sysadmins will not administer the ECs' remote systems.

  • The same basic HPC hardware requirements (age, CPU & RAM, warranties) and UCI computer-use rules and codes of conduct will apply to ECs. Please contact Joseph Farran <jfarran@uci.edu> to discuss these requirements.

  • Terminal access to HPC will only be via ssh, and only via ssh public keys (no password logins). As such, ECs will be expected to provide their public ssh keys (see the example following this list).

  • Data transfer to/from HPC can be done with the usual network tools (scp, rsync, bbcp, etc.) as well as via the Globus Connect system, as long as the ECs provide a Globus Connect endpoint (see the example following this list).

  • Data on HPC is NOT BACKED UP, and ECs will be expected to provide off-site replication for valuable data. The HPC filesystems are provided as scratch space only.
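As a minimal sketch of the ssh-key requirement above (the login host 'hpc.oit.uci.edu' and the account name 'ec_user' are placeholders; use the values supplied when the account is created), an EC can generate a keypair on their own workstation and send us only the public half:

    # generate a keypair; the private key never leaves the EC's machine
    ssh-keygen -t rsa -b 4096 -C "ec_user@home-institution" -f ~/.ssh/id_rsa_hpc
    # send us the PUBLIC key only (the .pub file)
    cat ~/.ssh/id_rsa_hpc.pub
    # once the key is installed on HPC, log in with it
    ssh -i ~/.ssh/id_rsa_hpc ec_user@hpc.oit.uci.edu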

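In the same spirit, a sketch of a routine transfer with rsync over ssh (again, the host name, account, and target directory '/path/to/sponsor/space' are placeholders for whatever the UCI sponsor's allocation provides):

    # push a local data directory into the sponsor's allocated space on HPC;
    # -a preserves permissions/timestamps, --partial lets interrupted transfers resume
    rsync -av --partial -e "ssh -i ~/.ssh/id_rsa_hpc" \
        ./project_data/  ec_user@hpc.oit.uci.edu:/path/to/sponsor/space/project_data/

scp and bbcp take the same user@host:path form; a Globus Connect transfer instead requires the EC to set up an endpoint at their own institution.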
3. Agreement

We agree to the above conditions for external use of the HPC cluster.

Desired Starting Date:
Ending Date, if known or desired:

3.1. Contributed Hardware Description

Serial Number/Chassis ID:
Make/Model:
CPUs:
RAM:
Onboard storage:
Optional devices:
   (GPUs, specialty disk or network controllers)
Spare devices provided:
   (extra disks, cables, power supplies)
Provide or attach a description of the device.
   ('lshw' output, a sales invoice, or a quote with a description; see the example commands below)
EC institution property tag # if known:
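
As a convenience for the device description requested above, commands like the following (run on the machine to be contributed; the output file names are arbitrary) capture a hardware summary that can be attached to this form:

    # short, human-readable hardware summary
    sudo lshw -short > contributed-node-lshw.txt
    # full, detailed listing
    sudo lshw > contributed-node-lshw-full.txt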

3.2. Participants

UCI Sponsor

Name          :
Position      :
Dept          :
School        :
Email         :
Phone number  :

Signature     :

External Collaborator

Name          :
Position      :
Dept          :
School        :
Institution   :
Email         :
Phone number  :

Signature     :