SC|05: Gateway to Discovery



StorCloud Applications use StorCloud's unprecedented heterogeneous, petabyte-scale storage farm to power the showcase applications demonstrated at SC|05.

Interactive Exploration and Analysis of Terabyte-scale Biomedical Image Datasets

Team Leads:
Joel Saltz, Ohio State University
Tahsin Kurc, Ohio State University

Advanced imaging technologies continue to increase the usefulness of imaging in biomedical research. In collaboration with biomedical researchers, we are developing image analysis applications and runtime support for terabyte-scale image datasets. These datasets include time-dependent MR images and very large digitized microscopy slides. The analysis applications involve sequences of data- and compute-intensive operations on images to extract features and reconstruct 3D/4D volumes. The runtime system supports these operations in a heterogeneous environment and is designed to take advantage of large, distributed storage and high-bandwidth interconnects. In this User Application, we will demonstrate the use of StorCloud to enable interactive exploration of very large MRI and digitized microscopy images. In our application, each microscopy image can be 100 GB in size, and a dataset can consist of hundreds of such images. The applications require access to subsets of image data from multiple such datasets.
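
As a rough illustration of the access pattern involved, the sketch below reads a small region of interest out of a single memory-mapped raw image without touching the rest of the file. The file name, layout, and dimensions are hypothetical, and the team's actual runtime system is far more sophisticated than this.

    # Minimal sketch: extract a region of interest from a very large image
    # without loading the whole file into memory. File name, dtype, and
    # dimensions are hypothetical stand-ins.
    import numpy as np

    # Memory-map a hypothetical ~100 GB microscopy slide stored as raw uint8 RGB.
    HEIGHT, WIDTH, CHANNELS = 200_000, 180_000, 3
    slide = np.memmap("slide_0001.raw", dtype=np.uint8, mode="r",
                      shape=(HEIGHT, WIDTH, CHANNELS))

    # Only the pages backing this 2048x2048 tile are actually read from disk.
    y, x, tile = 50_000, 75_000, 2048
    roi = np.array(slide[y:y + tile, x:x + tile, :])  # copy the tile into RAM
    print(roi.shape, roi.nbytes / 2**20, "MiB")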

Additional Information:
http://www.bmi.osu.edu/areas_and_projects/gemiac.cfm


Simultaneous Visualization of Simulated Combustion Data

Team Leads:
Jackie Chen, Sandia National Laboratories
Kwan-Liu Ma, UC Davis

High-fidelity direct numerical simulation of turbulent, non-premixed flames presents tremendous challenges to subsequent data visualization and analysis. Interactive visualization helps in understanding the dynamic mechanisms of extinction and re-ignition in turbulent flames. In this demonstration, we present simultaneous visualization strategies that use a suite of advanced visualization techniques to address the fundamental problem of visualizing two or more variables simultaneously, for improved understanding and validation.
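
As a toy illustration of the underlying idea, the sketch below overlays two synthetic scalar fields in a single view, one as a color map and one as contour lines. The variable names and data are stand-ins, and the team's actual techniques go well beyond this.

    # Minimal sketch of showing two variables at once: "temperature" as a
    # color map and "mixture fraction" as contour lines. The fields are
    # synthetic stand-ins, not the actual DNS combustion data.
    import numpy as np
    import matplotlib.pyplot as plt

    x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    temperature = np.exp(-30 * ((x - 0.5)**2 + (y - 0.4)**2))   # variable 1
    mixture_fraction = 0.5 * (1 + np.tanh(10 * (y - 0.5)))      # variable 2

    fig, ax = plt.subplots()
    im = ax.imshow(temperature, origin="lower", extent=(0, 1, 0, 1), cmap="hot")
    cs = ax.contour(x, y, mixture_fraction, levels=8, colors="cyan")
    ax.clabel(cs, inline=True, fontsize=7)
    fig.colorbar(im, label="temperature (normalized)")
    plt.show()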

This visualization capability will be demonstrated at the NNSA/ASC booth using eight visualization servers and the booth power wall. We plan to store our data on DDN's IB storage as well as Intransa's iSCSI storage, both provided by StorCloud. The data will be accessed over four IB and four 10GE links through four Lustre OSTs located at the ASC booth. Under this scenario, all Lustre file-level I/O will use the IB fabric inside the ASC booth, and block-level I/O will use the inter-booth IB and 10GE links.


GAMESS I/O over InfiniBand

Team Leads:
Troy Benjegerdes, Ames Laboratory - Scalable Computing Laboratory
Brett Bode, Ames Laboratory - Scalable Computing Laboratory

GAMESS is a popular computational chemistry application in which certain problem sizes make heavy use of disk-based storage. Our goal is to demonstrate a real GAMESS job, running on a single node, that utilizes the full bandwidth of a 4X InfiniBand connection (8 gigabits per second).
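
For context, saturating that link means the scratch storage must sustain roughly 1 gigabyte per second. The sketch below is a minimal sequential-write microbenchmark for checking whether a file system can hold such a rate; the path and sizes are hypothetical, and this is not part of GAMESS itself.

    # Minimal sketch of a sequential-write microbenchmark. To keep a single
    # GAMESS node busy, scratch storage must sustain on the order of 1 GB/s
    # (the 8 Gb/s data rate of 4X InfiniBand). Path and sizes are hypothetical.
    import os, time

    CHUNK = 8 * 2**20          # 8 MiB per write
    TOTAL = 4 * 2**30          # 4 GiB test file
    buf = os.urandom(CHUNK)

    fd = os.open("/scratch/iotest.dat", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    for _ in range(TOTAL // CHUNK):
        os.write(fd, buf)
    os.fsync(fd)               # make sure data actually reached the device
    os.close(fd)
    elapsed = time.time() - start
    print(f"{TOTAL / elapsed / 2**30:.2f} GiB/s sustained write")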

Additional Information:
http://www.msg.ameslab.gov/GAMESS/GAMESS.html


Next-generation hybrid network and grid application development for the LHC (Large Hadron Collider) experiment and the SC|05 conference

Team Leads:
Harvey Newman, California Institute of Technology
Julian Bunn, California Institute of Technology

Caltech is a major participant in the Compact Muon Solenoid (CMS) collaboration for CERN's LHC. We have been designing and building state-of-the-art WAN infrastructure, based on the LHC tiered architecture, that supports a Grid-based system of physics Web Services. At SC|05 we intend to use multiple 10GE waves and StorCloud disk storage to demonstrate the kind of Grid Web Services that will be used in the LHC experiments, and to improve network and disk-to-disk transfer performance over 10GE WAN links. We plan to use ~12 10GE waves to interconnect the SC|05 conference site with CERN, Caltech, and other domestic and international partner Grid Service sites, e.g., UKlight, UERJ, FNAL, and AARnet, and to demonstrate the equipment, software, network designs, and methods for transferring large scientific datasets disk to disk at high speed around the globe.
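
One basic ingredient of fast disk-to-disk transfers over long WAN paths is sizing the TCP socket buffer to the link's bandwidth-delay product. The sketch below illustrates that sender-side tuning; the host, port, file, and link numbers are hypothetical, and the team's actual transfer tools are not shown here.

    # Minimal sketch of sender-side TCP tuning for a high-latency 10GE path:
    # size the socket buffer to the bandwidth-delay product so the pipe stays
    # full. Endpoint, file, and numbers are hypothetical.
    import socket

    GBPS = 10e9                    # one 10GE wave
    RTT = 0.150                    # ~150 ms round trip, e.g. a transatlantic path
    BDP = int(GBPS / 8 * RTT)      # bytes in flight to fill the pipe (~187 MB)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The OS may cap this unless kernel network limits are raised.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP)
    sock.connect(("tier1.example.org", 5001))   # hypothetical endpoint
    with open("dataset.part", "rb") as f:       # hypothetical source file
        sock.sendfile(f)                        # stream the file to the socket
    sock.close()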

Additional Information:
http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2005/hiperf.html


National Center for Data Mining

Team Lead:
Robert Grossman, University of Illinois at Chicago

The National Center for Data Mining will deploy its Teraflow Data Services and the application-layer DSTP web services with StorCloud as the storage back end. This will allow us to demonstrate proper operation of Teraflow Data Services and DSTP with multi-terabyte datasets.

Additional Information:
http://www.teraflowtestbed.net


Real-time Visualization of a Scientific Simulation using ViSUS

Team Leads:
Holger Jones, LLNL
Eric Brugger, LLNL

ViSUS is a distributed framework for real-time streaming and visualization of large datasets generated by scientific simulations. ViSUS uses progressive rendering algorithms and parallel/hierarchical data streaming techniques to reduce the latency between the simulation and its attached visualization and analysis tools. ViSUS is designed to compute, in place, a fast permutation of the data in the form of a hierarchical space-filling curve. The framework is scalable in that data streaming can be dynamically tailored to match the available I/O and network resources, as well as the desired visual fidelity. Because it is inherently cache-oblivious, it is well suited to checkpoint/restart across differing node counts. As such, ViSUS is an effective checkpoint/restart and real-time data analysis tool for researchers. At SC|05, we will demonstrate ViSUS visualizing data from a simulation of turbulent fluid mixing run on one of LLNL's large clusters.
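
For intuition about space-filling-curve layouts, the sketch below computes a plain 2D Morton (Z-order) index, a simple curve of the family ViSUS builds on; ViSUS's actual hierarchical ordering is more involved and is not reproduced here.

    # Minimal sketch of a 2D Morton (Z-order) index: interleave the bits of
    # (x, y) so that spatially nearby samples land at nearby file offsets.
    def morton2d(x: int, y: int, bits: int = 16) -> int:
        """Interleave the bits of (x, y) into a single Z-order index."""
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
            z |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
        return z

    # Nearby (x, y) samples map to a contiguous, cache-friendly index range,
    # which is what makes coarse-to-fine streaming of subregions cheap.
    for p in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]:
        print(p, "->", morton2d(*p))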


Visualization of a High-Resolution Simulation of a Compressible, Turbulent Flow

Team Leads:
Parks Fields, LANL
Paul Woodward, LCSE, U of MN

Our movie shows the development of a multi-fluid mixing layer under the action of a shear instability, with air blowing over a region of sulfur hexafluoride (SF6) at Mach 0.2 and the SF6 moving in the opposite direction at Mach 0.5. The shear has a sinusoidal velocity profile, and the mixture profile has an initial thickness of 16.3 cells. Vertical velocity perturbations in the fundamental mode and its 11th harmonic trigger the Kelvin-Helmholtz instability that mixes the two fluids, allowing our turbulence model to predict the energy transfer from the small-scale turbulence on the near side to the larger-scale flow on the far side. We used this simulation to test our new multi-fluid PPM model, which tracks the multi-fluid interface through sub-grid structure in the fractional-volume variable. A detailed performance study of the multi-fluid PPM code can be found at http://www.lcse.umn.edu.
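
The sketch below is a hypothetical reconstruction of such an initial condition: a velocity blend across a layer 16.3 cells thick plus vertical-velocity perturbations in the fundamental mode and its 11th harmonic. The grid size, perturbation amplitude, and tanh blend are assumptions for illustration, not values from the actual PPM run.

    # Hypothetical initial condition: air (Mach 0.2) over SF6 (Mach -0.5),
    # a mixing layer ~16.3 cells thick, and vertical-velocity perturbations
    # in the fundamental mode and its 11th harmonic.
    import numpy as np

    NX = NY = 512
    x = np.arange(NX)
    y = (np.arange(NY) - NY / 2)[:, None]    # layer centered at y = 0

    # Blend from SF6 (bottom) to air (top); tanh stands in for the
    # sinusoidal blend used in the actual simulation.
    blend = 0.5 * (1 + np.tanh(y / 16.3))    # 0 below the layer, 1 above
    u = (-0.5 + 0.7 * blend) * np.ones(NX)   # streamwise velocity, shape (NY, NX)

    # Vertical-velocity perturbation: fundamental mode plus its 11th harmonic.
    eps = 0.01                               # assumed amplitude
    v = eps * (np.sin(2 * np.pi * x / NX)
               + np.sin(2 * np.pi * 11 * x / NX)) * np.ones((NY, 1))

    f_sf6 = 1.0 - blend * np.ones(NX)        # fractional volume of SF6
    print(u.shape, v.shape, f_sf6.shape)     # all (512, 512)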


Data Intensive Computing Showcase

Team Lead:
Gregory Newby, ARSC

This StorCloud submission presents several different viewpoints on, and applications of, data-intensive computing. These include Grid-based information retrieval, massive time series of remote-sensing data, visualization of tsunami simulations, and personalized data sub-collections.