Scientific Computing Resource (SCR)

The Scientific Computing Resource group was formed in January 2012 to provide computational support and expertise for the other center resource groups and for center researchers. The resource also serves as a liaison between SCSB researchers and national computational facilities, such as the Texas Advanced Computing Center in Austin and the supercomputing centers at the University of California San Diego and Oak Ridge National Laboratory. The resource also collaborates with researchers across the UTMB campus as a whole, as well as with those at the member institutions of the Gulf Coast Consortium.

Our computational, modeling, and visualization resources, computing hardware support, high-performance Linux system software support, and scientific software support services are available to SCSB members, to the UTMB campus as a whole, and, by collaboration arrangement, to researchers at other institutions.

Our staff have many years of experience with both high-performance scientific computing platforms and scientific software, and are available for technical support.

Documentation

Instruments

  • Staging Cluster

    Most UTMB researchers whose work requires large-scale supercomputing centers also need a local platform on which to develop and debug new software and control scripts, without the long waiting times of production submission queues. To fill this need we have provided a small staging cluster of heterogeneous nodes (5 compute nodes and a control node) running a queuing system (SLURM) identical to those at the national supercomputing centers. Projects developed and debugged on the staging cluster can therefore be expected to transfer quickly and easily to a supercomputing center for production runs; a sample job script follows the node list below. We do allow some production runs on the staging cluster as availability permits.

    The cluster compute nodes are of 3 types:

    • Two nodes designed for predominantly GPGPU-based computing contain NVIDIA Tesla K20 GPUs, each with 2496 CUDA cores, as well as dual Intel E5-2650 CPUs with a total of 32 compute cores.
    • Two nodes designed for predominantly CPU-based computing, but with some GPGPU support, contain NVIDIA Tesla M2075 GPUs, each with 448 CUDA cores, as well as dual Intel E5-2670 CPUs with a total of 32 compute cores.
    • One node, designed for predominantly high-memory computing, is a Dell PowerEdge R620 containing 768 gigabytes of main memory and two Intel E5-2600 series CPUs.
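
    As a hedged illustration (the partition name, program, and input file below are hypothetical examples, not the cluster's actual configuration), a minimal SLURM batch script for the staging cluster might look like this:

        #!/bin/bash
        #SBATCH --job-name=md_test       # name shown in the queue
        #SBATCH --partition=gpu          # hypothetical partition name
        #SBATCH --nodes=1                # one staging-cluster node
        #SBATCH --ntasks-per-node=16     # MPI ranks on that node
        #SBATCH --gres=gpu:1             # one GPU (GPU nodes only)
        #SBATCH --time=02:00:00          # short wall time for debugging

        # Launch the application; adjusting the partition, GPU count,
        # and wall time is typically all that is needed to move the
        # same script to a national center for production runs.
        srun ./my_simulation input.dat

    Jobs are submitted and monitored with the standard SLURM commands (sbatch, squeue, scancel), which behave identically at the national centers.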

  • Data Storage

    The SCR has obtained a large data storage facility to help researchers with large data needs and to house the CryoEM database. It is a JBOD-based system, currently installed with 130 terabytes of space and expandable to a maximum of 200 terabytes. It is configured for RAID 5-style redundancy using the ZFS file system, which supports on-the-fly reconfiguration and hot-swappable disks.
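
    Purely as an illustrative sketch (the pool name and device paths are hypothetical, and the actual layout of our system may differ), a ZFS pool of this kind, with RAID 5-style redundancy and a hot spare, can be built and later grown on the fly:

        # Create a pool with RAID-Z (ZFS's RAID 5 analog) redundancy
        # and a hot spare; "scrdata" and the device names are made up.
        zpool create scrdata raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde \
            spare /dev/sdf

        # Expand the pool on the fly by adding another RAID-Z group.
        zpool add scrdata raidz /dev/sdg /dev/sdh /dev/sdi /dev/sdj

        # Carve out a filesystem (e.g., for the CryoEM database)
        # and check pool health.
        zfs create scrdata/cryoem
        zpool status scrdata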

  • 3D Printing

    Our 3D printing facility houses two printers:
    • A ProJet 660Pro, a full-color powder/epoxy printer capable of producing models up to 8" × 10" × 15". It is used primarily to create models of molecular systems for our center members, but it has also been used in collaboration with other researchers in the Gulf Coast Consortium.
    • A MakerBot Replicator 2X, a heated-extruder printer with a build volume of 10" × 6" × 6" that can print two colors concurrently in either PLA or ABS plastic. It has been used primarily for durable models, such as unusually shaped experimental sample containers.
    [Photos: example ProJet 660Pro prints, including a UTMB model, a virus, a protein in cartoon representation, a VDW surface, and a flask.]
    We are willing to attempt any research-related model structure, but we request that adequate lead time be allowed before the model is needed.
  • Visualization

    Visualization Display Systems

    The SCR is in the process of creating a distributed scientific visualization facility composed of Visualization Display Systems (VDSs) located at strategic points around campus. Our first VDS has been deployed in the CryoEM facility in MRB, and two more are under construction: one will be deployed in the Scientific Computing Collaboration Area in Research Building 6, and the other in the X-ray facility in BSB.

    Each VDS has a large screen array driven by high-performance graphics workstations on a local high-speed (10 Gbit Ethernet) network. Each is capable of double high-definition (2160p) resolution on every screen in the array, as well as 3D stereo visualization, and each will also be equipped with a storage server for immediate visualization of experimental or simulation data as they become available. The SCR is also collaborating with UTMB Information Services on the new campus network deployment, which will bring 10 Gbit networking directly to the VDS controlling workstations and to the large data storage in the Administration Building data center.
  • Scientific Software

    The SCR maintains a large library of current scientific software that is available for use by SCSB members and by other researchers on the UTMB campus. The library contains the most heavily used open-source titles as well as many lesser-known packages, along with some proprietary and licensed software. All software and licenses are kept up to date and are available to all researchers who meet the terms of the particular license.
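
    Access details vary by platform, but scientific software libraries of this kind are commonly exposed through environment modules. Purely as an illustrative sketch (the package names and versions are hypothetical, and the SCR's actual access method may differ):

        # List the packages installed on the system, then load
        # specific ones into the current shell environment.
        module avail
        module load gromacs/5.1        # hypothetical package/version
        module load vmd                # hypothetical package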

Managers

Gillian C. Lynch, PhD
E-mail: gclynch@utmb.edu
Tel: (409) 772-0721

John S. Perkyns, PhD
E-mail: jsperkyn@utmb.edu
Tel: (409) 772-0722

Emergency Plan