Network Group Works to Improve Internet

The Internet has become an integral part of our daily lives, by and large making them simpler and better. Sometimes, however, it frustrates us: files can take forever to download, the quality of voice communication leaves much to be desired, and we still cannot watch our favorite shows on the Internet. These experiences leave us wondering why the Internet exhibits such wide variations in performance and quality, and why it still cannot provide all the services we want.

The Computer Networking and Multimedia Research Group, led by Professors David Du and Zhi-Li Zhang, is working to address some of these issues, with the ultimate goal of exploiting the full potential of the Internet. One of their research directions is the problem of delivering value-added services over the Internet. By value-added services, they mean services that go beyond the communication primitives available today, such as high-bandwidth subscriber services (video streaming), real-time communication services (voice over IP, videoconferencing), and integrated communication services. Other research projects seek to improve the robustness of the Internet so that it will better support applications with Quality of Service (QoS) requirements.

In its current form, the Internet lacks essential properties required for delivering these services effectively. One such deficiency is its limited support for communication primitives. The Internet was designed to provide a best-effort connectivity service: there is no guarantee that any packet will be delivered by a fixed time, or even that it will be delivered at all. When a user has a session with a remote server, the stream of packets flowing from the user to the server transits a number of routers, and at each router all packets are given the same treatment. The immediate consequence of this architecture is that the Internet cannot support applications with QoS requirements.
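
To make the best-effort model concrete, here is a toy sketch (written for illustration only, not drawn from the group's systems) of a single router queue in Python: every packet waits in the same first-in, first-out line, and once the buffer fills, packets are simply dropped with no guarantee of delivery or delay.

    # Toy illustration: a single best-effort router queue.
    # Every packet gets the same FIFO treatment; when the buffer is full,
    # packets are dropped, and nothing bounds how long a queued packet waits.
    from collections import deque

    class BestEffortRouter:
        def __init__(self, buffer_size=8):
            self.queue = deque()
            self.buffer_size = buffer_size
            self.dropped = 0

        def enqueue(self, packet):
            if len(self.queue) >= self.buffer_size:
                self.dropped += 1      # no delivery guarantee: packet is lost
                return False
            self.queue.append(packet)  # voice, video, and file transfers all wait in the same line
            return True

        def forward_one(self):
            return self.queue.popleft() if self.queue else None

    router = BestEffortRouter(buffer_size=2)
    for pkt in ["voice-1", "file-1", "voice-2", "file-2"]:
        router.enqueue(pkt)
    print(router.forward_one(), "forwarded;", router.dropped, "packets dropped")

A real router is vastly more complex, but the basic point holds: voice packets and bulk file transfers receive exactly the same treatment.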

To further illustrate some of the issues and challenges facing the Internet, it helps to understand how the underlying networks are pieced together to form the global Internet. The Internet "fabric" is, in a sense, two-tiered. At the lower level of abstraction, the Internet is a massive collection of routers that cooperate to carry traffic from one point to another. At the higher level of abstraction, these routers are grouped into "network domains" (or, in Internet routing jargon, Autonomous Systems), each of which is owned and managed by a distinct administrative entity. As a consequence, things run fairly smoothly within each network domain. The interconnection between these domains, however, is a completely different matter. The interesting aspect of these interconnections is that the path traffic takes from one network domain to another is determined more by commercial arrangements than by geography. For example, if you reside in the Twin Cities and subscribe to a broadband service from Time Warner, traffic between your PC and the University computers goes through Denver or Los Angeles, depending on the direction. The interconnection between these entities is unregulated and sometimes not visible externally. The other artifact of this arrangement is the "too many cooks spoiling the pot" syndrome. Since there is no centralized authority governing the operation of these domains, their day-to-day operation introduces a great deal of instability, which makes the Internet as a whole unpredictable. Moreover, the peering points (the interconnections between network domains), which are not under the authority of either side, have anecdotally been blamed as bottlenecks.
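
The way commercial arrangements can trump geography is easiest to see with a toy route-selection rule of the kind used in inter-domain routing: a domain typically prefers routes learned from its customers over those learned from peers or providers, even when the preferred route is longer. The AS numbers, relationships, and preference values in the sketch below are invented for illustration.

    # Hypothetical sketch: inter-domain route choice driven by business relationships.
    # An AS typically prefers a route heard from a customer over one from a peer,
    # and a peer route over a provider route, even if the preferred route is longer.
    PREFERENCE = {"customer": 3, "peer": 2, "provider": 1}

    # candidate routes to the same destination prefix, as (relationship, AS-path)
    candidate_routes = [
        ("provider", ["AS7018", "AS209", "AS57"]),        # short path via a provider
        ("customer", ["AS30", "AS12", "AS88", "AS57"]),    # longer path via a customer
    ]

    def best_route(routes):
        # rank first by commercial relationship, then by AS-path length
        return max(routes, key=lambda r: (PREFERENCE[r[0]], -len(r[1])))

    print(best_route(candidate_routes))   # the longer customer route wins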

In this area, the group's research aims to understand the large-scale structural properties of the Internet. Given the size and heterogeneity of the Internet, it is natural to treat it as a black box. Events that occur far away serve as inputs to the black box, and the events observed locally are considered the outputs. As an event propagates from input to output, both amplifying and damping factors are at work, and these depend on the structure of the graph inside the black box. Knowledge of these structural properties therefore provides insight into modeling the behavior of the Internet.
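
The sketch below gives a hypothetical flavor of this black-box view: a topology with made-up per-link gain factors, where how much of a far-away event is finally observed locally depends entirely on the structure of the graph it traverses.

    # Toy sketch of the "black box" view: an event injected far away is amplified
    # or damped along each hop before it is observed locally. The topology and
    # per-link gain factors below are invented purely for illustration.
    graph = {                      # node -> list of (next hop, gain on that hop)
        "far-away": [("transit-1", 1.6), ("transit-2", 0.5)],
        "transit-1": [("local", 0.9)],
        "transit-2": [("local", 1.2)],
        "local": [],
    }

    def observed_effect(node, magnitude):
        """Propagate an input event and sum what finally reaches the 'local' observer."""
        if node == "local":
            return magnitude
        return sum(observed_effect(nxt, magnitude * gain) for nxt, gain in graph[node])

    print(observed_effect("far-away", 1.0))   # how much of the remote event we see locally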

A related area of interest is developing techniques to monitor the Internet so that its "health" can be assessed. The non-cooperative nature of the commercial agreements between network domains obscures information, and observations made at one point are often only loosely coupled with the actual events that trigger them. The network group is exploring ways in which a distributed set of monitoring entities can stitch together locally observed events to produce a "composite" picture of the actual event or events. They hope their research in this direction will help in developing algorithms that improve the robustness of the Internet.
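
The following hypothetical sketch illustrates the stitching idea: each monitor reports only a local symptom with a timestamp, and observations that fall within a short time window are tentatively grouped into one composite event. The monitors, timestamps, and events are made up.

    # Hypothetical sketch: each monitor only sees a local symptom with a timestamp;
    # grouping observations that fall into the same short time window gives a
    # rough "composite" view of what may be a single underlying event.
    from itertools import groupby

    observations = [                  # (time in seconds, monitor, what it saw)
        (100.2, "monitor-A", "route change to prefix P"),
        (100.9, "monitor-B", "latency spike toward P"),
        (101.4, "monitor-C", "packet loss toward P"),
        (220.0, "monitor-A", "unrelated route change"),
    ]

    WINDOW = 5.0                      # observations within 5 s are tentatively correlated

    def composite_events(obs):
        obs = sorted(obs)
        return [list(group) for _, group in groupby(obs, key=lambda o: int(o[0] // WINDOW))]

    for event in composite_events(observations):
        print([m for _, m, _ in event], "->", [what for _, _, what in event])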


Pictured from left to right, front row: Chang-Ho Choi, David Du, Zhi-Li Zhang, Ewa Kusmierek; second row: James Beyer, Kuai Xu, Joseph Hong, Esam Sharafuddin, Dingshan He, Yinzhe Yu; last row: Yingfei Dong, Guor-Huar Lu, Sanghwan Lee, Jeff Krasky, Jaideep Chandrashekar, Srivatsan Varadarajan

However, making the underlying network well-behaved does not in itself solve the problem. Supporting QoS and enabling new applications require support and cooperation from the underlying network providers. One of the network group's more significant research initiatives toward addressing this issue is the Service Overlay Network (SON) architecture. In this comprehensive approach, they are investigating the feasibility of enabling service delivery through virtual, service-specific overlay networks or "clouds". The overlay networks are tied to the underlying network domains by "service gateways", and service requests are quickly passed to the appropriate service cloud. Within the cloud, the service provider can ensure that QoS requirements are met by negotiating Service Level Agreements (SLAs) with the network domains that it spans. Since network providers now stand to profit from these agreements, they have a strong economic incentive to honor them. One of the key advantages of this architecture is that it allows the overlays to bypass the peering points among the network domains, and thus avoids the potential performance problems associated with them. Relying on these bilateral SLAs, the SON can deliver end-to-end QoS-sensitive services to its users through appropriate provisioning and resource management.
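
As an illustration of how a SON might reason about such SLA-backed overlay links, the sketch below (with invented gateway names, delays, and thresholds) treats the overlay as a graph of service gateways and admits a request only if some gateway-to-gateway path meets its end-to-end delay bound.

    # Illustrative sketch only: a service overlay as a graph of service gateways,
    # where each overlay link is backed by an SLA with the underlying domain.
    # A request is admitted only if some gateway-to-gateway path meets its delay bound.
    import heapq

    overlay = {   # gateway -> list of (neighbor gateway, SLA delay in ms)
        "gw-minneapolis": [("gw-chicago", 12), ("gw-denver", 25)],
        "gw-chicago":     [("gw-newyork", 20)],
        "gw-denver":      [("gw-newyork", 40)],
        "gw-newyork":     [],
    }

    def sla_path_delay(src, dst):
        """Smallest end-to-end delay over SLA-backed overlay links (Dijkstra)."""
        dist, heap = {src: 0}, [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                return d
            for nxt, link_delay in overlay[node]:
                if d + link_delay < dist.get(nxt, float("inf")):
                    dist[nxt] = d + link_delay
                    heapq.heappush(heap, (dist[nxt], nxt))
        return None

    delay = sla_path_delay("gw-minneapolis", "gw-newyork")
    print("admit video call" if delay is not None and delay <= 35 else "reject", delay)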

The Internet offers far more possibilities than we have seen or imagined so far. One major effort the group is undertaking is to develop a large-scale, Internet-wide, intelligent storage system. By connecting increasingly smart storage devices to the Internet, they could turn the Internet into a global information repository and vast storage system. Furthermore, this intelligent storage system will allow you to access files and documents in the same manner no matter where you are or what computer system you are using. With such a system in place, you won't have to kick yourself for forgetting to copy that important document to your laptop before you travel, or for forgetting to synchronize it back to your desktop in your office when you return from your trip. All you will need to do is plug your laptop into the Internet, and you will be able to access the document as if it were on your local disk drive. To realize this vision, a research consortium on Intelligent Storage Systems (DISC) has been established at the University's Digital Technology Center (DTC). Led by Professor David Du, the consortium involves several faculty members (Professors Yongdae Kim, David Lilja, Ahmed Tewfik, Jon Weissman, and Zhi-Li Zhang) as well as graduate students in the DTC. Industrial partners including StorageTek, IBM, Seagate, Intel, Cisco, and EMC have either joined the endeavor or expressed interest in supporting the research activities in DISC. The assembled team is working on the many challenges in building such a large-scale, intelligent storage system. Based on the OSD (Object Storage Device) model, they are currently developing a two-tier, self-organizing system architecture to support the capabilities required of the envisioned intelligent storage system: fast search and retrieval, data migration and consistency, user and device mobility, QoS, and security.
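
The following rough sketch, assuming a simple OSD-style split between a directory tier and the storage devices themselves, shows how a global object name could be resolved to whichever device currently holds the data, so that the same call works from any machine. The class and device names are invented for this example.

    # Rough sketch, assuming an OSD-style split: a directory tier resolves a
    # global object name to whichever storage device currently holds it, so the
    # same call works from any machine. Names and devices are invented.
    class DirectoryTier:
        def __init__(self):
            self.location = {}                 # global object id -> device id

        def register(self, object_id, device_id):
            self.location[object_id] = device_id

        def lookup(self, object_id):
            return self.location[object_id]

    class StorageDevice:
        def __init__(self, device_id):
            self.device_id, self.objects = device_id, {}

        def put(self, object_id, data):
            self.objects[object_id] = data

        def get(self, object_id):
            return self.objects[object_id]

    directory = DirectoryTier()
    devices = {d: StorageDevice(d) for d in ("osd-office", "osd-home")}

    def save(object_id, data, device_id):
        devices[device_id].put(object_id, data)
        directory.register(object_id, device_id)

    def open_anywhere(object_id):
        # the caller never needs to know which device, or which city, holds the object
        return devices[directory.lookup(object_id)].get(object_id)

    save("report.doc", b"important slides", "osd-office")
    print(open_anywhere("report.doc"))         # same call works from the laptop on a trip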

A global intelligent storage system is just one of many possibilities the Internet can offer. Another exciting service, dubbed "grid computing", leverages all the computers connected to the Internet and turns the Internet into a giant computer. As a giant computing grid, the Internet will offer various value-added computing services on demand, and enable geographically dispersed users to seamlessly perform computing and collaborate over the Internet. Toward this goal, various software tools and utilities, collectively referred to as middleware, must be integrated into the Internet. Working with other researchers, the group is investigating a number of research issues in this area. Two types of novel middleware systems, with an emphasis on collaborative experimentation, are under development. Professor Du is part of a research team working on an NSF-funded project in the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES), called the Multi-Axial Subassemblage Testing (MAST) system. The MAST system will be used to test three-dimensional components of large-scale building and bridge structures under cyclic loading, in order to investigate the integrity of new and existing structural systems subjected to earthquakes. The objective is to allow researchers to remotely access the system and conduct collaborative experiments from a distance.

-J. Chandrashekar, Z. Duan,
Z-L Zhang and David Du