-- DanFraser - 23 Mar 2012

Campus Grids Maturity Model (CGMM)

We define a campus grids maturity model to help campuses benchmark their efforts and to see what increasingly mature campus grids look like. As an organization promoting campus grid maturation, we also want a way to determine whether we're making progress. We define five maturity levels as follows:

| CGMM Level | Characteristics of organizations at this level |
| 1 | No organized or coordinated campus grid effort. Pockets of research computing, typically funded by individual researchers. Little or no support or documentation. |
| 2 | Some localized organization around campus grids. Some resource sharing at the departmental or college level. Minimal support and documentation. |
| 3 | Campus-wide organization and/or broad visibility of campus grids. Good examples of resource sharing exist, and there is some ability to utilize resources outside of the campus via partner campuses or the Open Science Grid. Some documentation and local personnel support for campus grid users. |
| 4 | Campus-wide organization or visibility of campus grid initiatives. Widespread sharing of on- and off-campus resources. At least part-time dedicated personnel support and some documentation for campus grid users. |
| 5 | Campus grids are a 'way of life' for campus researchers, with on- and off-campus resource sharing the default. Mature user-facing documentation and dedicated personnel support for campus grid users. |

Deployed Campus HTC Infrastructures

Center for High Throughput Computing (CHTC) -- UW Madison

Contacts: Miron Livny (PI), Brooklin Gore (Campus Champion), Dan Bradley (HEP, CMS, Glideins), Greg Thain (High Throughput Parallel Computing, Condor), Jaime Frey (Bosco, Grid Universe, Condor)

Website: Center for High Throughput Computing

CGMM Level: 4

Status: The CHTC at the University of Wisconsin-Madison is a federation of about 15,000 CPU cores in roughly four collections, or 'pools'. Condor, developed at the UW-Madison, is our default batch scheduler. The UW-Madison is also an OSG member, operating as the GLOW Virtual Organization (VO). Our campus resources, coupled with the OSG, deliver around 1.5 million CPU hours per week to campus researchers. We do not charge for access to these resources. We have a small team (~2 FTEs spread across ~6 distributed computing specialists) who work one-on-one with campus researchers to help them move their applications to a distributed HTC model. We typically have around 70 active research projects running on our resources at any given time.
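
To make the workflow concrete, the sketch below shows a minimal Condor submit description for a single vanilla-universe job. It is an illustrative example only, not taken from CHTC documentation; the script and file names are hypothetical placeholders.

  # Minimal submit description for one vanilla-universe job.
  # run_analysis.sh and input.dat are hypothetical placeholders.
  universe       = vanilla
  executable     = run_analysis.sh
  arguments      = input.dat
  output         = job.$(Cluster).out
  error          = job.$(Cluster).err
  log            = job.$(Cluster).log
  request_cpus   = 1
  request_memory = 1GB
  queue

Passing a file like this to condor_submit queues the job for matchmaking against available slots in the pools described above.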

Opportunities: While we consider our campus research computing capabilities mature, we still have plenty of opportunities. We're currently working on 1) improving our local documentation to make it easier for researchers to help themselves; 2) continuing to provide more 'turn-key' solutions for researchers (like we've done for MATLAB and R); and 3) working closely with OSG technical staff to understand how to simplify and maximize our access to OSG resources.

DiaGrid

Profile coming soon...

Florida International University

Profile coming soon...

Holland Computing Center -- University of Nebraska

Contacts: David Swanson (Director), Adam Caprez (Application Specialist), Ashu Guru (Application Specialist), Derek Weitzel (Bosco, OSG), Brian Bockelman (CMS, OSG)

Website: Holland Computing Center

CGMM Level: 4

Status: HCC is the centralized research computing facility for the University of Nebraska system. HCC manages roughly 16,000 CPU cores in four clusters spread across two campuses, Lincoln and Omaha. The Holland Computing Center is a member of the OSG, running as the hcc VO. We have a relatively large team of two application specialists, three grid specialists, and five system administrators who work together to deliver computing to users of HCC.

Opportunities: Our campus research computing is constantly evolving. New resources and new ideas are continually changing how we use and administer our resources. We are working on improving our interaction with researchers to enable them to manage their computing. Additionally, we aim to provide portals for common applications that run on our campus grid.

Rosen Center for Advanced Computing (RCAC) - Purdue University

Contacts: Fengping Hu (HPC Systems Administrator), Technical Contacts, User Support

Website: Rosen Center for Advanced Computing

CGMM Level: 4

Status: RCAC is the central resource for research computing at Purdue University. RCAC manages five large community clusters and several smaller, specialized clusters, totaling approximately 50,000 cores. In addition, RCAC manages the Purdue Condor pools, which scavenge cycles from the clusters and from desktops distributed across academic and administrative departments. Twelve systems administrators, eight user support/application specialists, and 16 students support the hardware and software infrastructure. Additionally, the Scientific Solutions Group provides custom support for HPC/HTC-related projects, including the DiaGrid hub portals.
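
As a hedged illustration of how a user might check scavenged capacity, the standard Condor status commands below summarize a pool; they assume access to a Condor submit host and are not specific to Purdue's configuration.

  # Summarize the pool: total slot counts broken down by state
  condor_status -total

  # List only slots that are currently unclaimed and could accept backfill work
  condor_status -constraint 'State == "Unclaimed"'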

Opportunities: With the explosion in scale, availability is beginning to outpace demand. Work is underway to engage new areas of research that don't historically make use of advanced computing resources. Another area of focus is the creation of resources targeted specifically at undergraduate instruction, in order to provide modern education to Purdue students and increase their competitiveness in industry.

University of Chicago (UC3)

Contacts: Rob Gardner (Lead), Marco Mambelli (OSG, Glidein Factory), Lincoln Bryant (HEP, ATLAS Tier 2, UC3 systems admin)

Website: UC3 - Project wiki, CycleServer console (login: guest guest)

CGMM Level: 2

Status: UC3 - UChicago Computing Cooperative - is a framework for building a sustainable distributed high throughput computing environment on campus. It leverages the facilities of the ATLAS Midwest Tier 2 and Tier 3 Centers, the University's Information Technology Services, the South Pole Telescope, and the Scientific Imaging and Reconstruction Facility. At present over 6500 cores are available.

Opportunities: A UC3 VO will be created, and a GlideinWMS VO frontend service is planned to permit overflow onto opportunistic OSG resources.

University of Florida (Sunshine Grid)

Virginia Tech

  • VBI -- Virginia Bioinformatics Institute
  • Physics