Memorandum of Understanding for External Use of HPC
===================================================
by Harry Mangalam
v1.00, July 15th, 2013

// last linkchecked July 15th, 2013
// Harry Mangalam mailto:harry.mangalam@uci.edu[harry.mangalam@uci.edu]

// Convert this file to HTML & move it to its final dest with the command:
// export fileroot="/home/hjm/nacs/mou-external-use-of-hpc"; asciidoc -a icons -a toc2 -b html5 -a numbered ${fileroot}.txt; scp ${fileroot}.[ht]* moo:~/public_html

== Rationale

Occasionally, UCI research groups will request that External Collaborators (ECs) from other institutions have access to the http://hpc.oit.uci.edu[HPC Linux cluster]. This can range from storage of common data to shared analysis of that data for a collaborative research project. Since HPC is a UCI resource, funded and supported in the main by UCI for the benefit of UCI researchers, we believe the following is an equitable way of allowing such collaborations without depriving other UCI researchers of their resources.

== Conditions for Use

The following points describe the conditions for ECs to use the HPC Linux cluster.

- Since HPC is a mixed 'condo cluster' (composed of both institutional and user-contributed resources), ECs will only be able to store data and perform computations on the owners' hardware (or share in the owners' allocated storage on common filesystems). ECs will not be able to submit jobs to the http://hpc.oit.uci.edu/free-queue[system Free Queues]. This means that for significant storage, the UCI groups should purchase or rent storage on HPC equal to the amount the ECs want to store.

- ECs can http://hpc.oit.uci.edu/add-compute.html[purchase hardware to install on HPC] in support of this collaboration, with the same support structure that UCI researchers have; the ECs retain ownership of the hardware and the right to extract it when they want (there will be a longer extraction time for distributed storage nodes). Again, this does not allow them to run their jobs on the 'Free Queues', nor to store data on common filesystems unless they have contributed an equivalent amount to the expansion of those filesystems. Hardware to be contributed to HPC will have to meet our basic HPC requirements. Please consult with us (Joseph Farran or Harry Mangalam) before purchasing compute or storage hardware for such a project.

- ECs will have access to all https://en.wikipedia.org/wiki/Open-source_software[Open Source] or freely available software and other resources on HPC, but not to proprietary software (Mathematica, Matlab, SPSS) unless it is paid for by the UCI collaborators or by the ECs themselves.

- System administration will only be available to ECs on an 'as can be spared' basis. UCI sysadmins will not administer the ECs' remote systems.

- The same basic HPC hardware requirements (age, CPU & RAM, warranties) and UCI computer user rules and conduct will apply to ECs. Please contact mailto:jfarran@uci.edu[Joseph Farran] to discuss these requirements.

- Terminal access to HPC will only be via ssh, and only via http://www.ece.uci.edu/~chou/ssh-key.html[shared ssh keys] (no passwords). As such, ECs will be expected to provide their public ssh keys; a sketch of the process is shown after this list.

- Data transfer to/from HPC can be done with the usual network tools (http://en.wikipedia.org/wiki/Secure_copy[scp], http://en.wikipedia.org/wiki/Rsync[rsync], http://www.slac.stanford.edu/~abh/bbcp/[bbcp], etc.) as well as via the https://www.globusonline.org/globus_connect/[Globus Connect] system, as long as the ECs provide a Globus Connect endpoint. Example transfers are sketched after this list.

- Data on HPC is *NOT BACKED UP* and ECs will be expected to provide off-site replication for valuable data. The HPC filesystems are provided as scratch space only.
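For reference, the following is a minimal sketch of how an EC might generate a key pair and provide the public half. The key type, size, file names, and the 'ec_user' login name are examples only; the actual login name will be assigned when the account is created.

---------------------------------------------------------------
# on the EC's own workstation: generate a key pair
# (key type, size, comment, and file name here are examples only)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_hpc -C "ec_user@collaborating.institution"

# send us ONLY the public half (~/.ssh/id_rsa_hpc.pub);
# the private key should never leave the EC's machine.

# once the key and account are in place, log in with
# ('ec_user' is a placeholder for the assigned login):
ssh -i ~/.ssh/id_rsa_hpc ec_user@hpc.oit.uci.edu
---------------------------------------------------------------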
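Similarly, the command-line transfers below are only a sketch; the 'ec_user' account and the destination directory are placeholders for whatever owner-group storage is actually assigned.

---------------------------------------------------------------
# copy a single file to HPC (account and path are placeholders)
scp results.tar.gz ec_user@hpc.oit.uci.edu:/path/to/group/storage/

# mirror a directory, preserving times and keeping partial transfers
rsync -av --partial mydata/ ec_user@hpc.oit.uci.edu:/path/to/group/storage/mydata/
---------------------------------------------------------------

For repeated transfers of the same directory tree, rsync is usually preferable to scp since it only sends files that have changed; for very large data sets, the Globus Connect route described above (with an EC-provided endpoint) is the other option.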
== Agreement

We agree to the above conditions for external use of the HPC cluster.

---------------------------------------------------------------
Desired Starting Date:
---------------------------------------------------------------

---------------------------------------------------------------
Ending Date, if known or desired:
---------------------------------------------------------------

=== Contributed Hardware Description

---------------------------------------------------------------
Serial Number/Chassis ID:
Make/Model:
CPUs:
RAM:
Onboard storage:
Optional devices: (GPUs, specialty disk or network controllers)
Spare devices provided: (extra disks, cables, power supplies)
Provide or attach a description of the device.
  ('lshw' output, sales invoice, or quote with description)
EC institution property tag # if known:
---------------------------------------------------------------

=== Participants

*UCI Sponsor*

---------------------------------------------------------------
Name         :
Position     :
Dept         :
School       :
Email        :
Phone number :
Signature    :
---------------------------------------------------------------

*External Collaborator*

---------------------------------------------------------------
Name         :
Position     :
Dept         :
School       :
Institution  :
Email        :
Phone number :
Signature    :
---------------------------------------------------------------