- 23 Mar 2012
We define a campus grids maturity model to help campuses benchmark their efforts and to see what increasingly mature campus grids look like. As an organization promoting campus grid maturation, we also want a way to determine if we're making progress. We define five maturity levels thus:
| CGMM Level || Characteristics of organizations at this level |
| 1 || No organized or coordinated campus grid effort. Pockets of research computing, typically funded by individual researchers. Little or no support or documentation. |
| 2 || Some localized organization around campus grids. Some resource sharing at the departmental or college level. Minimal support and documentation. |
| 3 || Campus-wide organization and/or broad visibility of campus grids. Good examples of resource sharing exist, and there is some ability to use resources outside the campus via partner campuses or the Open Science Grid. Some documentation and local personnel support for campus grid users. |
| 4 || Campus-wide organization or visibility of campus grid initiatives. Widespread sharing of on- and off-campus resources. At least part-time dedicated personnel support and some documentation for campus grid users. |
| 5 || Campus grids are a 'way of life' for campus researchers, with on- and off-campus resource sharing the default. Mature user-facing documentation and dedicated personnel support for campus grid users. |
Center for High Throughput Computing (CHTC) -- UW Madison
(HEP, CMS, glideins), (High Throughput Parallel), (Bosco, Grid Universe, Condor)
Website: Center for High Throughput Computing
The CHTC at the University of Wisconsin-Madison is a federation of about 15,000 CPU cores in roughly four collections, or 'pools'. Condor, developed at UW-Madison, is our default batch scheduler. UW-Madison is also an OSG member, operating as the GLOW Virtual Organization (VO). Our campus resources, coupled with the OSG, deliver around 1.5 million CPU hours per week to campus researchers. We do not charge for access to these resources. We have a small team (~2 FTEs spread across ~6 distributed computing specialists) who work one-on-one with campus researchers to help them move their applications to a distributed HTC model. We typically have around 70 active research projects running on our resources at any given time.
While we consider our campus research computing capabilities mature, we still have plenty of opportunities. We are currently working on 1) improving our local documentation to make it easier for researchers to help themselves; 2) continuing to provide more 'turn-key' solutions for researchers (as we have done for MATLAB and R); and 3) working closely with OSG technical staff to understand how to simplify and maximize our access to OSG resources.
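To illustrate the distributed HTC model described above, a minimal Condor submit description might look like the following sketch. The file and program names are hypothetical, not taken from CHTC's materials:

```
# Hypothetical HTCondor submit description for a single HTC job.
universe    = vanilla          # run on any matching execute node in the pool
executable  = my_analysis      # hypothetical user program
arguments   = input_0.dat      # hypothetical input file
output      = job_0.out        # captured stdout
error       = job_0.err        # captured stderr
log         = job_0.log        # Condor's record of the job's lifecycle
request_memory = 1GB           # resources the job asks the scheduler for
queue                          # queue one copy of the job
```

A researcher would hand this file to `condor_submit`; replacing the final line with `queue 1000` is the usual way such a workload is scaled out across a pool.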
Florida International University
Profile coming soon...
Holland Computing Center -- University of Nebraska
Website: Holland Computing Center
HCC is the central research computing facility for the University of Nebraska system. HCC manages roughly 16,000 CPU cores in four clusters spread across two campuses, Lincoln and Omaha. The Holland Computing Center is an OSG member, running as the hcc VO. We have a relatively large team of two application specialists, three grid specialists, and five system administrators who work together to deliver computing to HCC users.
Our campus research computing is constantly evolving: new resources and new ideas are continually changing how we use and administer our systems. We are working on improving our interaction with researchers to enable them to manage their own computing. Additionally, we aim to provide portals for common applications that run on our campus grid.
Rosen Center for Advanced Computing (RCAC) - Purdue University
(HPC Systems Administrator)
Website: Rosen Center for Advanced Computing
RCAC is the central resource for research computing at Purdue University. RCAC manages five large community clusters and several smaller, specialized clusters, totaling approximately 50,000 cores. In addition, RCAC manages the Purdue Condor pools, which scavenge cycles from the clusters and from distributed desktops in academic and administrative departments. Twelve systems administrators, eight user support/application specialists, and 16 students support the hardware and software infrastructure. In addition, the Scientific Solutions Group provides custom support for HPC/HTC-related projects.
With the explosion in scale, availability is beginning to outpace demand. Work is underway to engage new areas of research that do not historically make use of advanced computing resources. Another area of focus is the creation of resources targeted specifically at undergraduate instruction, in order to provide a modern education to Purdue students and increase their competitiveness.
University of Chicago (UC3)
(OSG, Glidein Factory), (HEP, ATLAS Tier 2, UC3 systems admin)
Website: UC3 Project wiki, CycleServer console (login: guest/guest)
UC3, the UChicago Computing Cooperative, is a framework for building a sustainable distributed high throughput computing environment on campus. It leverages the facilities of the ATLAS Midwest Tier 2 and Tier 3 Centers, the University's Information Technology Services, the South Pole Telescope, and the Scientific Imaging and Reconstruction Facility. At present, over 6,500 cores are available.
A UC3 VO will be created, and a VO frontend service to GlideinWMS is planned to permit overflow onto opportunistic OSG resources.
University of Florida (Sunshine Grid)
- VBI -- Virginia Bioinformatics Institute