
NIH Record

Scientists, Start Your Engines
Ultra-Swift Internet2 Connection Now Available at NIH

By Carla Garnett

NIH recently opened a new on-ramp to the next generation Internet (NGI) via a high-speed, 155 Mbps (million bits per second) connection to the very high performance Backbone Network Service (vBNS). Launched in 1995, the vBNS is a nationwide network supporting high-performance, high-bandwidth research applications and is the product of a 5-year cooperative agreement between MCI and the National Science Foundation.


The vBNS was designed for the scientific and research communities and originally provided high-speed interconnection among NSF supercomputing centers and connection to NSF-specified network access points. Currently, the vBNS connects NSF supercomputing centers and research institutions that are selected under the NSF's high-performance connections program.

Speed It Up a Little More

Paving the way for the new ramp took a bit longer than anticipated, according to the NLM team that helped broker the various agreements to get vBNS access for NIH.

"NIH's application broke ground in that we were the first government site to go through the process of applying to become attached to the vBNS as just a site," explains team member Mike Gill, a network engineer with the Communications Engineering Branch of NLM's Lister Hill National Center for Biomedical Communications. Various federal networks, including the Department of Energy's ESnet and NASA's NREN, already connect to the vBNS, but NIH was the first federal site that is not itself a nationwide network.

At NLM, (from l) Jules Aronson, Victor Cid and Mike Gill are reviewing preliminary vBNS performance testing results.

One requirement of the application was that NIH needed to be invited to join by a university. Dr. Richard Ewing, dean of science at Texas A&M University, extended the invitation.

Future frontiers for the enhanced connectivity environment of the vBNS are limited only by the creativity of its users. Gill and other team members hope the new capability will spur application development (see sidebar, Putting Speed to Good Use). Also, an evaluation of access to multimedia (text and digital x-ray) databases is under way.

The vBNS is already being used — with great success — to access NLM databases, although the degree of success depends on the endpoint connections (the so-called "last mile" problem) and on the computer and network setup at each end.

"As fast as the vBNS is, it can be limited just like the 'regular' Internet in this way," Gill said. "However, with institutions also equipped with 155 Mbps access to the vBNS, we can expect higher transfer rates. Things that were not practical before are now practical."
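As a rough illustration of what a 155 Mbps link changes in practice, the sketch below computes ideal transfer times for a large file at several link speeds. The file size and comparison links are example figures for illustration only, not measurements from the NLM tests.

```python
def transfer_time_seconds(file_bytes: int, link_mbps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and congestion."""
    bits = file_bytes * 8
    return bits / (link_mbps * 1_000_000)

# A hypothetical 500 MB dataset, e.g. a slice of a medical image archive.
file_bytes = 500 * 1024 * 1024

for name, mbps in [("56k modem", 0.056), ("T1 line", 1.544), ("vBNS link", 155.0)]:
    seconds = transfer_time_seconds(file_bytes, mbps)
    print(f"{name:>9}: {seconds:,.0f} s (~{seconds / 60:,.1f} min)")
```

At the vBNS rate the same file that ties up a dial-up line for most of a day moves in under half a minute — which is what makes previously impractical applications practical.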

Joe Mambretti, director of the International Center for Advanced Internet Research at Northwestern University, agrees the possibilities generated by the increase in speed are unlimited. "This could be the subject of a very large paper," he says. "One example may be digital video. The traditional Internet does not do video well. It is essentially a text and image medium. The video that is there consists of small, grainy, jittery images. With our current next generation infrastructure, we can do full-motion, full-color, full-screen video with CD-quality audio. Increasingly, we are seeing demos of very high resolution images also. The more information in the image, the higher the quality and resolution. [More and more] these networks are used for 3D imaging. Also, with the advanced network it is possible to access large amounts of research data, for example, for long-term studies of numerous medical records."

How're We Doing?

A study is planned to measure the response time of accessing NLM information from remote vBNS locations and to collect other performance data about the network paths between NLM and the remote sites, according to Gill. The work is being done in two phases: Phase I, which involves a limited number of vBNS sites, is designed to implement and test the needed tools and methodologies. Phase II will collect more extensive data from a larger group of institutions representative of the biomedical community on the vBNS.

"We are measuring a number of network performance parameters between remote sites in vBNS and our web servers and other hosts at NLM," explains Victor Cid, an NLM visiting scientist from the University of Chile in Santiago who is coordinating the evaluation. "Performance measurements are currently being obtained from computers at Yale University, University of Maryland at College Park, University of Washington, University of Illinois at Chicago, University of Southern California, Texas A&M University and UCLA. In the future we may be also testing from other locations. One set of experiments measures the time it takes to access web pages from our web servers at NLM and retrieve some large data sets. We are also measuring a number of other technical parameters such as network delay, communications anomalies, maximum data throughput, performance variation over time, and so on. We are running similar tests from computers at NLM to the same remote locations to study network asymmetries."
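The timing experiments Cid describes can be sketched in miniature: fetch a document over HTTP and record the elapsed wall-clock time. This is an illustrative sketch only — the URL passed in would be a placeholder, and the NLM team's actual test harness is not described in the article.

```python
import time
import urllib.request

def timed_fetch(url: str) -> tuple[int, float]:
    """Return (bytes received, seconds elapsed) for one retrieval."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    return len(body), time.perf_counter() - start

# Repeating the fetch at intervals and keeping every sample is what lets a
# team study throughput and performance variation over time for each site.
```

Running the same measurements in both directions between a pair of hosts, as the NLM team does from its own computers to the remote locations, is one way to expose the network asymmetries Cid mentions.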

Additionally, they are using performance data obtained through other metric efforts within vBNS and the Internet, such as the Advanced Network & Services' Surveyor project, he says. A special network setup will allow them to perform the same set of experiments through both the vBNS and the current Internet (called the commodity Internet).

Senior Systems Scientist Jim Seamans uses the kind of large image file applications for which NIH's new access ramp to the vBNS is ideal.

"We expect to obtain data that will give us a reasonable estimation of the performance perceived by current NLM users when they access our information services through vBNS. The data will also allow us to explore some overall performance characteristics and some of the potentials of this high-bandwidth network."

The benefit of such testing is twofold, Cid points out. Not only will it tell developers how well the new on-ramp is working, but it may also reveal new ways to get the most out of both the vBNS and the commodity Internet. In addition, establishing vBNS connections here keeps NIH on the cutting edge of high-speed computer communication.

"vBNS is an experimental network, a 'test-bed' to study and develop future networked applications and new network technologies," Cid concludes. "The current Internet has often been plagued with communications delays and other problems. What we learn from vBNS may help us improve the Internet and teach us how to use it better. NLM and NIH have become very dependent on the Internet for different scientific and non-scientific purposes. NIH's participation in vBNS is a natural step toward increased involvement of our community in the evolution of the Internet and other related communication technologies."

Deceptively Fast: Comparing Potential Speed to Practical Speed

Although the new vBNS connection offers the potential to transfer data at 155 Mbps, actual transfer speeds can and will vary widely, depending on several other conditions, explains NLM visiting scientist Victor Cid, who is conducting evaluations of NIH's new high-speed on-ramp to the Next Generation Internet (Internet2). Users need to consider all conditions when attempting to calculate their own actual transmission rates, he cautions.

"155 Mbps is the speed of the link between NIH and vBNS," Cid says. "The end-to-end speed between two computers communicating through vBNS can be different. We are certainly limited by the speed of our link to vBNS, but there are many other factors that make it practically impossible for an application to reach that speed. For example, the speed of my LAN at NLM is only 100 Mbps; my workstation is fast but my communications software is not optimized for high-bandwidth networks, etc. The applications themselves (for example, Web servers) can be bottlenecks in the end-to-end communications. We will compare the end-to-end performance measured through vBNS with the performance measured through the commodity [regular] Internet from different types of computer platforms. It will also be possible to compare the measured speeds with the theoretical maximum speeds of the network paths (the speed of the slowest link on the network path)."
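Cid's point about the theoretical maximum of a network path — that it is simply the speed of the slowest link along the way — can be shown in a few lines. The link speeds below are made-up example figures, not measured values.

```python
def path_bottleneck_mbps(link_speeds_mbps):
    """The end-to-end ceiling of a network path is its slowest hop."""
    return min(link_speeds_mbps)

# Example path: a 100 Mbps LAN, the 155 Mbps vBNS access link, a faster backbone.
path = [100.0, 155.0, 622.0]
print(path_bottleneck_mbps(path))  # the 100 Mbps LAN caps the whole path
```

This is exactly the situation Cid describes at NLM: even with a 155 Mbps vBNS link, a 100 Mbps LAN at one end limits what any single transfer can achieve.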

Putting Speed to Good Use

NIH's new speedy connection to the NGI is limited only by the creativity of application developers. One group has already used the new access ramp to solve a problem common in what is called "volume rendering" — imaging techniques that help users visualize 3-dimensional data.

"Volume rendering of medical data produces accurate, highly detailed images of internal anatomy not available by other means," says Greg Johnson, associate staff programmer analyst at San Diego Supercomputer Center, University of California, San Diego. "Such imagery is useful for both diagnosis and education, and animation of these images is often critical to the analysis — to determine the size and orientation of key anatomical features relative to one another, for example." However, he points out, high resolution volume data — such as that from the Visible Human Project at the National Library of Medicine — exceed the central processing unit (CPU) and random access memory (RAM) limitations of current workstation technology. Such systems are unable to generate images at the rates necessary to sustain an effective exploration of the subject under study.

But Johnson and his colleagues at SDSC have come up with a solution. They have developed a system that combines advances in high performance computing, Web programming and network connectivity.

"Called the Massively Parallel Interactive Rendering Environment (MPIRE), the system can render multi-gigabyte volume datasets at near-interactive rates and deliver the results to any desktop computer equipped with a Web browser," he explains. MPIRE consists of two major components: a set of software engines for performing the rendering calculations and a graphical user interface in the form of a Java applet.

In a typical MPIRE session, the user loads a Web page containing the applet and configures the desired rendering parameters and high-performance computing (HPC) system. An MPIRE engine is then automatically started on the HPC platform, the data are loaded from disk local to the host, and an image is created and sent back to the applet. From there, the image is automatically updated as the user modifies the camera position, lighting, coloration or other rendering parameters.
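The request cycle described above can be sketched as follows. This is a purely hypothetical illustration of the pattern — parameters out, rendered frame back — and none of the names or interfaces below come from the actual MPIRE software.

```python
from dataclasses import dataclass

@dataclass
class RenderParams:
    camera: tuple = (0.0, 0.0, 5.0)   # viewpoint position
    lighting: float = 1.0
    colormap: str = "anatomy"

def render_on_hpc(params: RenderParams) -> bytes:
    # Stand-in for the remote rendering engine: a real system would ship the
    # parameters over the network and return a rendered frame as image data.
    return f"frame:{params.camera}:{params.lighting}:{params.colormap}".encode()

# Each user interaction changes the parameters and triggers a fresh frame.
params = RenderParams()
frame = render_on_hpc(params)
params.camera = (1.0, 0.0, 5.0)   # user orbits the camera
frame = render_on_hpc(params)     # applet receives the updated image
```

Because only parameters travel to the engine and only finished frames travel back, the heavy data never leaves the HPC host — which is what lets a multi-gigabyte dataset be explored from any desktop browser.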

Johnson predicts that distributed computing applications of the not-too-distant future are likely to go one step further. Data from an archive at one location would be automatically retrieved on demand and processed by a program running on a computer at another site. The entire process would be controlled via a graphical user interface running on the user's local computer at yet a third location.

"Data is seamlessly obtained, processed and delivered using multiple physically distributed discrete resources," Johnson points out. "At this point the user may no longer be aware — or even care — about the nature or location of these resources. They could be anything from simple tape storage arrays, to complex, multi-processor supercomputers situated in another room, another building or even another country. Here we find ourselves entering the next period of the Computer Age, an era of transparent computing."

Transparency comes in part by providing uniform access to the remote resources, he concludes. "The World Wide Web is an excellent example of this principle in action. Here users move from Web page to Web page through the consistent user interface offered by the browser, with little knowledge of the types or locations of the serving computers. For the interface to be effective, however, it must be backed by high-bandwidth, low-latency wide area networks of the type represented by the NLM's connection to the vBNS."
