OSG Newsletter, January 2013

Condor Name Change to HTCondor

As of October 2012, the Condor software has a new name that reflects its connection to High Throughput Computing: HTCondor (pronounced "aitch-tee-condor"). Known simply as "Condor" from 1988 until October 2012, the software was renamed to resolve a lawsuit challenging the University of Wisconsin's use of the "Condor" trademark. The letters at the start of the new name ("HT") derive from the software's primary objective: enabling high throughput computing, often abbreviated as HTC. The name change is now reflected in the software's web site, documentation, web URLs, email lists, and wiki. While the name of the software is changing, nothing about the naming or usage of the command-line tools, APIs, environment variables, or source code will change. Portals, procedures, scripts, gateways, and other code built on top of the Condor software should not have to change when HTCondor is installed. OSG sites can safely upgrade from Condor to HTCondor without any impact on OSG middleware operation.
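
In particular, a script that shells out to the familiar condor_* command-line tools keeps working unchanged after the upgrade. A minimal Python sketch, assuming the tools are on the PATH and a schedd is reachable:

    # Sketch: the condor_* tool names are unchanged under HTCondor.
    import subprocess

    for tool in ["condor_version", "condor_q"]:
        # Print the first non-empty line of each tool's output.
        lines = [l for l in subprocess.check_output([tool]).decode().splitlines() if l.strip()]
        print(tool, "->", lines[0] if lines else "(no output)")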

~ Todd Tannenbaum

From the “Memories of a Product Manager” blog: Higgs Boson: Think HTC instead of HPC

bosco.png

You are invited to read my thoughts, which lead from HEP LHC computing through to the individual researcher doing high throughput computing on campus.

It includes a good framing of the HPC/HTC conversation: "HTC is about sustained, long-term computation. You might think the difference between sustained long-term computation and a short-term sprint is merely quantitative, but this difference really is a qualitative one. What HTC is, in essence, is sustained throughput over long times … Getting people to think in a high throughput way helps a lot. There are still many machine idle times that anyone can access for free, but they are not HPC (High Performance Computing) resources. They may only be idle for an hour or two. If we have a single 10,000-hour job, it will never complete on the OSG. But if you are able to deploy the same task as a workflow of 10,000 one-hour jobs, you could finish in one day. Statistical and Monte Carlo techniques are often very applicable in HTC, and these are similar to the time-consuming stochastic modeling done for the Higgs boson."
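
The arithmetic behind the 10,000-job example is easy to sketch; in the snippet below, the number of concurrently held slots is an illustrative assumption rather than an OSG figure.

    # Back-of-the-envelope sketch of the 10,000-hour example above.
    total_work_hours = 10000   # one monolithic 10,000-hour task
    job_length_hours = 1       # split into one-hour jobs
    concurrent_slots = 420     # hypothetical opportunistic slots held at any one time

    num_jobs = total_work_hours // job_length_hours
    makespan_hours = float(num_jobs) / concurrent_slots * job_length_hours
    print("%d one-hour jobs on %d slots finish in about %.0f hours"
          % (num_jobs, concurrent_slots, makespan_hours))
    # About a day, while the single 10,000-hour job never fits in a slot that is
    # only idle for an hour or two.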

~ Miha Ahronowitz

CMS Update

In December, CMS completed a very successful 2012 proton-proton run. The last few months were particularly busy, as the data-taking rate effectively increased by a factor of two. More than 5 billion physics events were processed at Tier-1 facilities around the globe, and more than double that number were generated via Monte Carlo simulation. The Tier-1 and Tier-2 sites on OSG delivered more than 150 million HEP-SPEC06 hours in 2012. This month we are taking data in a short run for heavy-ion physics; after that, we will enter "Long Shutdown 1", an 18-month period during which we will not take data but will upgrade the Large Hadron Collider as well as the experiment's hardware and software. CMS Computing never rests, however; we're sure to keep the OSG busy as we crunch unprocessed data from the 2012 run and publish new and exciting results!

~ Burt Holzman

Using SAGA on OSG for Bioinformatics

The SAGA Project, led by the Research in Advanced Distributed Cyberinfrastructure and Applications Laboratory (RADICAL) at Rutgers University, develops a lightweight, open source Python module (SAGA-Python) that provides computational job management and data transfer and replication capabilities via a unified programming interface (API). The interface has been standardized by the Open Grid Forum community as GFD.90.

The binding from the API to a particular distributed infrastructure is provided by adaptors (plug-ins) that interface with multiple distributed computing middleware back-ends. Adaptors exist for most middleware stacks.
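
As a flavor of the API, here is a minimal SAGA-Python sketch of submitting one job through the Condor adaptor; the service URL, host name, and job details below are illustrative assumptions, not values from the article.

    # Minimal sketch: submit one job with SAGA-Python via the Condor adaptor.
    # The "condor://" URL, host name, and job details are illustrative assumptions.
    import saga

    js = saga.job.Service("condor://submit.example.org")  # the URL scheme selects the adaptor

    jd = saga.job.Description()
    jd.executable = "/bin/echo"
    jd.arguments  = ["Hello from SAGA on OSG"]
    jd.output     = "saga_job.out"

    job = js.create_job(jd)  # the same calls work regardless of the back-end adaptor
    job.run()
    job.wait()
    print("Job finished in state:", job.state)

In principle, pointing the same script at a different back-end is just a change to the service URL, which is what makes the adaptor model useful for moving work between infrastructures.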

SAGA has been supported and used on many production distributed cyberinfrastructures, including XSEDE, where SAGA usage accounted for more than 5M SUs (core hours) in 2012, and EGI. In 2012, driven by the demands of gateway developers and other XSEDE users, and as part of the NSF ExTENCI project, we developed adaptors for Condor and the Integrated Rule-Oriented Data System (iRODS) as a mechanism to support standards-based interoperability between XSEDE and OSG. Using iRODS as a dynamic caching capability for BigJob, a SAGA-based pilot-job implementation, we have shown that the integration of these new capabilities enables transparent, advanced data placement for data-intensive workloads (such as short-read sequence alignment) on OSG. The SAGA project is currently integrating these new capabilities with science gateways to allow concurrent cross-site utilization of OSG with XSEDE and EGI resources.

~ Ole Weidner, Shantenu Jha

Derek’s Blog – Improving Gratia’s Web Interface

Over the winter break, I worked on improving the interface that most users use for OSG accounting. When I returned from break, I worked with Ashu to integrate my changes with some recent changes he had made. The new interface runs on gratiaweb-itb, and the source for the new web page is hosted on GitHub. The first thing users will notice is the newly designed interface:

open_science_grid_accounting.jpg

New OSG Accounting Interface

The updated interface brings the style of the website in line with that of MyOSG and OIM (or close to it). The design stayed close to the original, but the menu on the right has changed significantly. First, the style of the menu has been updated. We also added a new category, the Campus and Pilot View, which contains new graphs showing usage by GlideinWMS, campus users, XSEDE users, and, in the future, Bosco users.

Let's run through a quick example. Assume I'm a VO manager and want to see where my VO is running, how many hours it has used, and who is running jobs.

1. Select the Pilot & Campus Accounting link.

2. Scroll to the bottom, to the Refine View.

3. Enter your VO name into the VO text box and hit enter.

This will pull up a custom page that shows usage for only your VO. For example, if I look at the osg VO:

open_science_grid_accounting_2.jpg

Usage by the OSG VO.

You can see from the graphs that the OSG VO has used roughly 80,000 CPU hours a day on the OSG and is running at over 20 sites. The sites listed at the bottom of the graph are ordered by total hours (I am happy to see Nebraska resources at #3, #6, and #9). You can also see that usage at individual sites varies by day: some days the VO gets significant usage at MWT2 (UChicago and IU), and other days it runs heavily at Nebraska. The new usage graphs are intended to help users, administrators, and VO managers view their usage. I hope you find them as useful as we have.

We hope the new webpage is an improvement. If you have comments or ideas for further improvements, we are interested in your feedback.

~ Derek Weitzel

Thoughts from OSG Communications

If you missed the first Campus Infrastructure Community webinar, you can still catch it on YouTube: Dan Bradley on CVMFS, Software Access Anywhere.

Coming next month is a full description of one community's quick ramp-up for a "just-in-time production" run before Christmas. Below is a profile of their usage as a taste.

profile_of_usage.jpg

Our satellite communications e-publication iSGTW is always looking for articles about science and research on distributed computing. Please do contact the new US editor, Amber Harmon, with ideas and to talk!

Amber_HarmonJPG.JPG

Do catch the ongoing blog posts.

~ Ruth Pordes, OSG Communications

