HSIEH LAYOUT 3/17/05 12:45 PM Page 114

ACCEPTED FROM OPEN CALL

Parallel Transport: A New Transport Layer Paradigm for Enabling Internet Quality of Service

Hung-Yun Hsieh and Raghupathy Sivakumar

ABSTRACT

The transport layer in the network protocol stack serves as a liaison between the application and the underlying network. Any quality of service provided by the network thus has to be effectively translated by the transport layer protocol in order to be enjoyed by the applications. In this article we argue for a fundamental rethinking of the transport layer design to facilitate such QoS delivery. We identify the key requirement for a QoS-enabling transport layer protocol as the ability to effectively handle multiplicity in terms of user differentiation levels, network resources, and service models. However, TCP, the transport layer protocol predominantly used in the Internet, is unable to support such multiplicity due to its single-state design. We extend TCP to a parallel transport layer protocol called parallel TCP (pTCP) that can tackle the different dimensions of multiplicity, and hence enable varying classes of QoS to applications. We discuss the applicability of pTCP in three specific domains with different levels of network support for QoS, and present simulation results substantiating our arguments.

INTRODUCTION

1 It has to be noted that while the three scenarios provide QoS with progressive levels of network support (from low to high), the scenarios themselves are not exclusive, and all three can coexist in the future QoS-enabled Internet.


The Internet is increasingly seen as an infrastructure that will lead to digital convergence, supporting a diverse set of applications including streaming media, videoconferencing, digital telephony, and non-real-time data transfer. This convergence has in turn resulted in an increasing focus on providing applications with quality of service (QoS) assurances that cannot be provided by the best effort service model of the Internet. The integrated services (IntServ) and differentiated services (DiffServ) architectures standardized by the Internet Engineering Task Force (IETF) are examples of approaches that can provide QoS assurances to applications. Notwithstanding the rapid advances made in the nature of QoS offered by the network, there still remain several key obstacles in the path toward enabling applications to enjoy the QoS. One such obstacle is the ability of the transport

0163-6804/05/$20.00 © 2005 IEEE

layer protocol, which acts as a liaison between the network and the application, to intelligently translate the QoS offered by the network for use by the application. The transmission control protocol (TCP) used by a majority of current Internet applications [1] was not designed for a QoS-enabled environment, and hence is limited in its ability to perform the aforementioned translation. The limitations of TCP in delivering QoS to applications are clearly shown by the following three scenarios providing varying classes of QoS in the Internet.1

Multiple differentiation levels: The simplest form of QoS that can be provided to applications is relative service differentiation. While the current best effort service model does not differentiate between flows belonging to different users, one that can provide weighted service differentiation, allowing users to choose "weights" and control the portion of service accorded to them, can have several advantages and applications [2, 3]. The key issue, however, is whether a transport layer protocol can appropriately use any weighted service differentiation provided by the network, or, better still, enable such weighted service differentiation with minimal or no network support. The performance of TCP is solely determined by the round-trip time and loss rate of the path it traverses, so it does not support the notion of multiple differentiation levels.

Multiple network resources: The difficulties in changing the Internet substrate to support QoS have prompted the use of so-called overlay networks for deploying new network services and protocols. A considerable amount of effort has in particular focused on addressing the suboptimality of Internet routing, and proposed different approaches for providing better QoS with QoS-aware routing and multipath routing [4, 5]. While the availability of multiple paths allows hosts participating in the overlay networks to potentially enjoy better QoS than a single path can provide, TCP, due to its assumption of first-in first-out delivery, cannot leverage such benefits when operating across multiple paths [6].

IEEE Communications Magazine • April 2005

Multiple service models: In networks with the required infrastructure to support a variety of service models, it is possible that an application
may subscribe to multiple services provided by the network. For example, while subscribing to an assured service with a minimum bandwidth guarantee, the application can at the same time benefit from using the best effort service for more cost-effective operation. Hence, a transport layer protocol should be able to accommodate the multiple service models a single application might subscribe to. TCP, designed for use with the best effort service, however, cannot support the use of multiple service models [7].

In this article we consider the question: How should the transport layer protocol for the Internet evolve to effectively deliver any available QoS to applications? In answering the question, we identify the multiplicity of user differentiation levels, network resources, and service models as an important characteristic of the environment that needs to be handled by the transport layer protocol. To design a QoS-enabling transport layer protocol, we introduce a new parallel transport paradigm that can leverage and handle the different dimensions of multiplicity. Specifically, we present a parallel instantiation of TCP called parallel TCP (pTCP) that can effectively deliver QoS to applications. At a high level, pTCP supports the same application programming interface (API) and follows the same semantics as TCP in terms of reliable and in-sequence delivery. However, pTCP allows parallel streams to be used between the source and destination in the connection. The individual streams themselves may traverse one or multiple physical paths using one or multiple service models. pTCP, through a unique decoupling of the data plane from the control plane, addresses the nontrivial challenge of effectively delivering to the application the aggregate of the bandwidths belonging to the individual streams. We show through simulations that the ability of pTCP to effectively support parallelization of connections enables applications to enjoy better QoS in the three QoS domains outlined earlier.

The rest of the article is organized as follows. First, we substantiate the argument that TCP cannot effectively deliver the desired QoS to applications. We then present the design principles, software architecture, and protocol overview of pTCP. Finally, we discuss the use and evaluate the performance of pTCP in the three QoS domains, and conclude the article.
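Since pTCP keeps TCP's API and only adds the number of pipes as a socket option (described later in the protocol overview), an application-facing sketch might look as follows. This is hypothetical: pTCP is a research protocol with no standard socket-level API, so `PTCPSocket` and the `PTCP_NPIPES` option constant are invented names for illustration.

```python
# Hypothetical application-facing sketch. pTCP has no standard socket
# API; PTCP_NPIPES and PTCPSocket are invented names for illustration.
PTCP_NPIPES = 1  # hypothetical option: number of TCP-v pipes to open

class PTCPSocket:
    """Same semantics as a TCP socket (reliable, in-sequence delivery),
    plus one option conveying how many pipes to stripe across."""
    def __init__(self):
        self.npipes = 1
        self.options = {PTCP_NPIPES: 1}

    def setsockopt(self, opt, value):
        self.options[opt] = value
        if opt == PTCP_NPIPES:
            self.npipes = value

# A weight-4 connection in the differentiation scenario, or a 4-path
# connection in the overlay scenario, simply requests four pipes:
s = PTCPSocket()
s.setsockopt(PTCP_NPIPES, 4)
```

The point of the sketch is that the application sees ordinary socket semantics; the multiplicity is expressed once, at connection setup.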

WHY NOT TCP?

As we explained earlier, a QoS-enabling transport layer protocol needs to handle the different dimensions of multiplicity in terms of differentiation levels, network resources, and service models. In this section we capture the essence of the three QoS domains using a single abstraction, and explain that the reason TCP cannot translate the network QoS for use by the application is its inability to achieve the desired performance in the abstraction. A common theme of the three QoS domains when compared to non-QoS scenarios is the need to use multiple units of resources (i.e., bandwidth fair share, end-to-end path, and service model) and achieve aggregate performance. To enjoy the desired QoS in the three domains,

an abstraction can be made for a connection to stripe across multiple data streams, where each stream may or may not belong to the same physical path and service model. Essentially, the number of streams used for a connection will be equal to the multiplicity of resources under consideration: • Approaches to achieve weighted service differentiation can use w streams traversing the same path for a connection with weight w, thus obtaining w times the performance of a default (unit weight) connection. • Approaches to operate over multiple paths can use one stream for each path traversed, thus achieving the aggregate of throughputs available along individual paths. • Approaches to use multiple service models can allocate one stream for each type of service to which the application subscribes, thus simultaneously utilizing the services with different characteristics. The reason TCP cannot deliver the desired QoS to the application is its lack of support for multiple streams in a connection.2 TCP is a single-state transport protocol that maintains the characteristics of the path it traverses such as the bandwidth (congestion window) and latency (round-trip time) in the form of transmission control block (TCB) variables. All packets transmitted or received in the connection are processed through the same set of TCB variables, triggering updates of the congestion window, round-trip time, and hence data rate achievable. The lack of support for multiple pipes (streams) prevents TCP not only from achieving throughput higher than the fair share of a single pipe, but also from aggregating the bandwidths available along multiple paths. If a TCP connection operates over multiple paths, packets traversing different paths may experience mismatched latencies and arrive at the destination out of order. The congestion control mechanism in TCP, however, expects predominantly first-in first-out delivery of packets. 
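The single-state limitation can be made concrete with a minimal transmission control block sketch. The fields and numbers below are illustrative only: TCP keeps one set of (cwnd, rtt) variables per connection, whereas a parallel transport keeps one TCB per pipe and aggregates their rates.

```python
from dataclasses import dataclass

@dataclass
class TCB:
    """Minimal transmission control block: the per-pipe state TCP keeps."""
    cwnd: int    # congestion window, in packets
    rtt: float   # smoothed round-trip time, in seconds

def rate(tcb):
    # TCP's sending rate is roughly one window per round-trip time.
    return tcb.cwnd / tcb.rtt

# Single-state TCP: one TCB, so at most one pipe's fair share.
tcp = TCB(cwnd=10, rtt=0.1)

# Parallel transport: one TCB per pipe; the connection rate is the
# aggregate over all pipes (here, three pipes with unequal windows
# and RTTs, as on mismatched paths).
pipes = [TCB(10, 0.1), TCB(20, 0.2), TCB(5, 0.05)]
aggregate = sum(rate(t) for t in pipes)
```

All packets of a single-state TCP connection are processed through the one `tcp` TCB; the parallel abstraction instead processes each pipe's packets through its own TCB and exposes only the aggregate to the application.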
Persistent packet reordering will hence result in TCP unnecessarily cutting down its congestion window and underutilizing the available bandwidth. While several approaches have been proposed to make TCP more robust to packet reordering [9], they still suffer from substantial performance degradation, especially when the loss rate is nonnegligible (greater than 1 percent). Note that lower-layer approaches that aim to hide the packet reordering from TCP [10, 11] are not applicable in the target scenarios involving multiple hops and multiple autonomous domains. A seemingly better solution to avoid packet reordering without changing TCP is to use an application layer striping approach that maintains multiple TCP sockets, one for each path available in the connection [12, 13]. In application layer striping, the sender stripes across multiple sockets according to the bandwidth estimates of each path, and the receiver performs resequencing of packets received from multiple sockets. Such an application-level approach, however, is not desirable since it increases the complexity at the application. More important, it can suffer from performance degradation due to the sending application failing to accurately profile the bandwidth available along each path. When striping at the application layer is disproportional to the actual sending rate at the transport layer, head-of-line blocking at the receiving application (with a finite receive buffer) can eventually cause the application to stop reading data from the TCP socket buffer, and slow down the faster path after the flow control mechanism in TCP takes place. The result is that the aggregate throughput the application can enjoy is throttled by the slowest path in the connection. We refer interested readers to [6] for a detailed presentation of the performance degradation due to head-of-line blocking in application striping approaches.

In summary, the single-state design of TCP prevents it from maintaining multiple pipes for delivering the desired QoS to the application. While an application layer striping approach using multiple TCP sockets addresses out-of-order delivery by providing multiple pipes in the connection, it suffers from application complexity and suboptimal performance. In the following, we present how TCP can be extended to a parallel transport protocol with a multistate design, hence enabling effective delivery of QoS to applications in the three QoS domains.

2 We note that the Stream Control Transmission Protocol (SCTP) [8] also has the notion of multiple streams in a connection. However, SCTP does not provide sequenced delivery across multiple streams. On the other hand, in our abstraction individual streams still need to be delivered to the application in a globally sequenced manner. For distinction, we use the term pipe in place of stream hereafter.

[Figure 1. pTCP for striped connections: the pTCP engine sits between the application and IP at both source and destination, owns the send and recv socket buffers along with the binding and active-pipes structures, and interacts with per-pipe TCP-v instances (TCP-v1, TCP-v2, holding only virtual send/recv buffers) through the open/close, established/closed, receive/send, and resume/shrunk interfaces.]
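The head-of-line blocking caused by disproportional application-layer striping can be illustrated with a toy discrete-time model. This is an illustrative sketch with made-up rates and buffer sizes, not the simulation of [6]: two paths deliver 3 and 1 packets per tick; a fixed 50/50 striping ratio throttles in-order goodput toward roughly twice the slow path's rate, while striping in proportion to bandwidth recovers the aggregate.

```python
def goodput(rates, weights, ticks=400, buf=8):
    """Toy model: the application stripes new packets over paths by the
    fixed ratio `weights`; path p delivers rates[p] packets per tick;
    the receiver holds at most `buf` out-of-order packets and releases
    data in global sequence. Returns in-order packets per tick."""
    sched = [p for p, w in enumerate(weights) for _ in range(w)]
    queues = [[] for _ in rates]      # per-path sender-side backlog
    seq, next_exp, delivered = 0, 0, 0
    held = set()                      # out-of-order receive buffer
    for _ in range(ticks):
        for _ in range(sum(rates)):   # app keeps every path backlogged
            queues[sched[seq % len(sched)]].append(seq)
            seq += 1
        for p, r in enumerate(rates):
            for _ in range(r):
                if not queues[p]:
                    break
                pkt = queues[p][0]
                if pkt == next_exp:
                    queues[p].pop(0)
                    delivered, next_exp = delivered + 1, next_exp + 1
                    while next_exp in held:   # flush now-in-order data
                        held.remove(next_exp)
                        delivered, next_exp = delivered + 1, next_exp + 1
                elif len(held) < buf:
                    held.add(queues[p].pop(0))
                else:
                    break   # receive buffer full: this path is stalled

    return delivered / ticks

# Two paths with bandwidths 3:1 packets per tick.
equal = goodput(rates=(3, 1), weights=(1, 1))         # throttled, ~2/tick
proportional = goodput(rates=(3, 1), weights=(3, 1))  # aggregate, 4/tick
```

Once the out-of-order buffer fills, the fast path is flow-controlled and the connection advances only as fast as the slow path releases the missing packets, which is exactly the throttling effect described above.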

THE PTCP PROTOCOL

In this section we first discuss how TCP can be extended to a parallel transport protocol called pTCP. We then present the software architecture of pTCP and its protocol operations that build atop TCP.

DESIGN PRINCIPLES

Maintaining multiple states: Since multiple pipes in a connection can traverse different paths exhibiting mismatched bandwidths, round-trip times, and loss rates, the first principle in designing a parallel transport protocol for supporting multiple pipes is to maintain multiple states in accordance with the multiplicity of resources used by the connection. Note that the multistate design is required even for the case of weighted differentiation where all pipes traverse the same physical path. Maintaining multiple states (e.g., congestion windows and retransmission timers) in accordance with the desired weight allows the connection to reduce bursty transmissions and the occurrence of timeouts that otherwise can limit the scalability of single-state approaches as the weight increases [2]. Since each (TCB) state is associated with only one pipe, the mechanisms in TCP that were designed for a single state (e.g., congestion control) can be reused with minimal changes in the parallel transport protocol.

Decoupling of functionalities: Application layer striping, as discussed earlier, is the simplest way to maintain multiple states in the connection. However, it suffers from significant overheads due to repetitive implementations of functionalities such as buffer management across multiple sockets. Therefore, to minimize overheads the parallel transport protocol should decouple functionalities associated with per-pipe characteristics from those that pertain to the aggregate connection. pTCP is thus designed as a wrapper around a slightly modified TCP called TCP-v (TCP-virtual), with the pTCP engine handling aggregate connection functionalities and TCP-v handling per-pipe functionalities. pTCP (engine) is the only component in the connection that maintains the socket buffer, while TCP-v deals only with virtual buffers consisting of virtual packets (namely, packets that contain only the TCP header with the data portion stripped off). Figure 1 illustrates the roles of the two components in a pTCP connection. Congestion control, the mechanism that estimates the available bandwidth along a pipe, is obviously a per-pipe functionality and is handled by TCP-v. Flow control, on the other hand, is the responsibility of pTCP since it maintains the socket buffer across multiple TCP-v pipes. Reliability in pTCP is primarily handled by TCP-v just as in TCP. However, under certain circumstances we describe later, pTCP can intervene and cause loss recovery to occur along a pipe different from the one where the loss occurred.
Delayed binding: Since TCP-v has information about the available bandwidth along the pipe it traverses, pTCP can perform intelligent striping based on the available space in the congestion window of TCP-v. Note that TCP-v maintains only virtual packets without application data. When a packet is transmitted by TCP-v, the virtual packet is bound to the real data in the pTCP socket buffer. The binding between the virtual packet and the application data is performed only when TCP-v has space in its window and is ready to transmit. Such delayed binding, coupled with TCP's self-clocked property, results in pTCP adapting the data striping ratio to instantaneous changes in the bandwidth and delay along individual TCP-v pipes. In other words, any packet bound to a TCP-v will either have already been transmitted or be transmitted immediately, precluding any chance of packets being held up in the buffer at the source and causing disproportional striping. An exception to this rule is when there is packet loss and the lost packet falls outside the reduced congestion window of the concerned TCP-v. In this case, the lost packet will be bound to a different TCP-v that has space in its congestion window for transmission. We describe how pTCP handles such a situation in the next design principle.

Dynamic reassignment: In steady state, pTCP will ensure that the number of outstanding packets in a pipe is proportional to the bandwidth along the corresponding pipe estimated by the concerned TCP-v. The delayed binding strategy further ensures that all bound packets are already in transit in the network. However, during congestion or bandwidth fluctuations, the reduction of the congestion window in TCP-v can result in bound packets falling outside the congestion window. If such packets are lost in transit, they will remain untransmitted until the congestion window of the concerned TCP-v expands beyond their sequence numbers. This can potentially result in head-of-line blocking at the receive socket buffer, eventually blocking other active pipes and resulting in a connection stall. Therefore, to minimize head-of-line blocking in the parallel transport protocol, pTCP tracks the congestion window of each TCP-v, and unbinds packets that fall outside the congestion window of the TCP-v to which they were assigned. Such unbinding allows those packets to be reassigned to the next TCP-v that can send more data, without potentially stalling the aggregate connection.

Redundant striping: The restriping mechanism described earlier is equivalent to a move operation that shifts a packet from one TCP-v to another. However, note that the minimum size of the congestion window is one, and hence a packet that still remains inside the congestion window of a TCP-v pipe will not be restriped by pTCP to another pipe. Consider a scenario where a pipe has shut down completely (e.g., timeouts due to severe losses or path blackouts). In this case, the first packet in the congestion window of the concerned TCP-v will never be reassigned, thus potentially resulting in head-of-line blocking and connection stalls. pTCP handles this situation by redundantly striping the packet inside the congestion window through another pipe every time a timeout is experienced and the congestion window is reset. In other words, a copy operation is performed (the packet is carried by both pipes) as opposed to a move operation, because the old TCP-v pipe needs at least one packet to probe for recovery from the shutdown. While such redundant striping may seem an overhead, it is performed only during timeouts. The potential benefit of avoiding connection stalls can significantly outweigh such an overhead [6].
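The delayed binding, dynamic reassignment, and redundant striping rules above can be sketched as bookkeeping over per-pipe bindings. This is a simplified illustration: the real pTCP operates on TCP sequence numbers inside TCP-v's window, and the class and method names here are invented.

```python
class Pipe:
    """One TCP-v pipe: its congestion window and currently bound packets."""
    def __init__(self, cwnd):
        self.cwnd = cwnd
        self.bound = []           # packet ids bound to this pipe

class Striper:
    """Sketch of pTCP's binding logic (illustrative, not the real code)."""
    def __init__(self, pipes):
        self.pipes = pipes
        self.unbound = list(range(100))   # data waiting in the send buffer

    def fill(self):
        # Delayed binding: bind data only where window space exists now,
        # so every bound packet is transmitted immediately.
        for p in self.pipes:
            while len(p.bound) < p.cwnd and self.unbound:
                p.bound.append(self.unbound.pop(0))

    def shrink(self, pipe, new_cwnd):
        # Dynamic reassignment: packets falling outside the reduced
        # window are unbound, then restriped onto pipes with room.
        pipe.cwnd = new_cwnd
        dropped = pipe.bound[new_cwnd:]
        pipe.bound = pipe.bound[:new_cwnd]
        self.unbound = sorted(dropped + self.unbound)
        self.fill()

    def timeout(self, pipe, other):
        # Redundant striping: on timeout the window collapses to one
        # packet, which is copied (not moved) onto another pipe; the old
        # pipe keeps its copy to probe for recovery.
        self.shrink(pipe, 1)
        if pipe.bound and pipe.bound[0] not in other.bound:
            other.bound.append(pipe.bound[0])

a, b = Pipe(cwnd=4), Pipe(cwnd=2)
s = Striper([a, b])
s.fill()          # a is bound 0..3, b is bound 4..5
s.shrink(a, 2)    # packets 2, 3 unbound; b has no room, so they wait
s.timeout(a, b)   # a stalls; its head packet 0 is copied onto b
```

Note that `shrink` is invoked by the shrunk interface described in the software architecture, and the copy in `timeout` is exactly the move-versus-copy distinction made above.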

SOFTWARE ARCHITECTURE

An architectural overview of the pTCP protocol along with its key data structures is illustrated in Fig. 1. Conceptually, pTCP (engine) functions as a wrapper around TCP-v, maintaining the socket buffers and interfacing with both the higher and lower layers. Any packet from the application is queued onto the send buffer of pTCP awaiting transmission, while any packet from the IP layer is queued onto the recv buffer. Since packets are handled wholly within the pTCP engine, TCP-v merely operates on virtual packets and hence virtual buffers. The interactions between pTCP and TCP-v that are initiated by the former are open, close, receive, and resume; those initiated by the latter are established, closed, send, and shrunk. When pTCP receives a packet from IP, it processes the packet, strips the data, enqueues the data onto the receive buffer, and sends the skeletal packet (sans the data) to the appropriate TCP-v through the receive interface. When TCP-v receives the packet, it enqueues the packet onto its virtual buffer, updates its local state, and sends any required acknowledgment packets. If a TCP-v has space in its congestion window to send new data, it builds a regular TCP header based on its state variables and invokes the send function. pTCP, on receiving the call, binds the virtual packet to the next unsent data packet in sequence, records the binding in the binding structure, and sends the packet out. TCP-v continues to invoke the send function until either there is no more space left in its congestion window, or it receives a FREEZE from pTCP when pTCP has no more new data to send. The concerned TCP-v then goes to sleep. pTCP maintains a list of pipes to which it has issued a FREEZE in the active pipes structure. When new data is received from the application, pTCP uses the resume function to notify all these active pipes. The pipes then initiate send requests as before. If a TCP-v has instead gone to sleep due to running out of available congestion window space, it is awakened through a receive call that can potentially open up more space. The open and close functions are used by pTCP to invoke the open and close functions of TCP-v during connection setup and teardown. The established and closed calls are used by TCP-v to inform pTCP of successfully accomplished open and close requests. The shrunk call is used by TCP-v to inform pTCP about any reduction in the congestion window size and the consequent dropout of packets from within the window. pTCP unbinds these packets as discussed earlier.
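The send-side interaction just described (send callbacks until a FREEZE, and resume when new application data arrives) can be sketched as follows. The class and method names mirror the article's interface, but the logic is a deliberately simplified illustration, not the pTCP implementation.

```python
class TCPv:
    """Per-pipe state machine (sketch): only the send-side handshake
    with the pTCP engine is modeled."""
    def __init__(self, engine, cwnd):
        self.engine, self.cwnd, self.inflight = engine, cwnd, 0
        self.frozen = False

    def try_send(self):
        # Invoke send() while window space remains, until pTCP freezes us.
        self.frozen = False
        while self.inflight < self.cwnd and not self.frozen:
            if self.engine.send(self):
                self.inflight += 1

    def freeze(self):
        self.frozen = True        # no new data: go to sleep

class PTCPEngine:
    """The wrapper: owns the real send buffer and binds data on send()."""
    def __init__(self, data):
        self.buffer = list(data)
        self.active = []          # pipes frozen for lack of new data

    def send(self, pipe):
        if not self.buffer:
            pipe.freeze()
            self.active.append(pipe)
            return False
        self.buffer.pop(0)        # bind next in-sequence data to the pipe
        return True

    def app_write(self, data):
        self.buffer.extend(data)
        pipes, self.active = self.active, []
        for p in pipes:           # resume: wake every frozen pipe
            p.try_send()

eng = PTCPEngine(data=[0, 1, 2])
p1, p2 = TCPv(eng, cwnd=2), TCPv(eng, cwnd=4)
p1.try_send()          # sends two packets, then is window-limited
p2.try_send()          # sends one, then freezes (send buffer empty)
eng.app_write([3, 4])  # resume wakes p2, which sends two more
```

Note the asymmetry the article describes: `p1` stopped because its window filled, so it would be awakened by a receive call (an acknowledgment), not by resume; only `p2`, frozen for lack of data, sits on the active-pipes list.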

PROTOCOL OVERVIEW

We now provide an overview of the protocol operations in pTCP, including connection management, congestion control, flow control, and reliability. We focus on how pTCP builds atop the protocol operations in TCP-v (namely TCP).

Connection management: The number of TCP-v pipes to open is conveyed to pTCP as a socket option when the application opens the pTCP socket. The connection setup involves recursively invoking the setup of the TCP-v pipes. pTCP allows data transfer to begin as soon as one of the pipes reaches the established state in the TCP state diagram. Similarly, for connection teardown the close command is issued to all TCP-vs. Unlike for open, pTCP waits for all the pipes to be closed before returning the closed status to the application.

Congestion control: Congestion control is the sole responsibility of TCP-v. pTCP does not require a specific congestion control mechanism to be used for each pipe. One of the salient features of pTCP is the decoupling of the pTCP functionality from that of TCP-v. This in turn allows each TCP-v to use the congestion control mechanism most appropriate to the characteristics of the path it traverses for achieving optimal performance.

Reliability: pTCP primarily relies on support from TCP-v to achieve reliability. A packet once bound to a TCP-v is expected to be delivered reliably therein. The exception to this rule is when pTCP unbinds a packet and restripes it on a different pipe. After restriping, it is the responsibility of the new pipe to reliably deliver the bound data to the receiver. In this scenario, when (and if) the old pipe attempts a retransmission of the packet, a new packet will be bound to the transmission. In other words, a retransmission at TCP-v does not need to be a retransmission for the striped connection. This is possible because of the decoupling of functionalities between pTCP and TCP-v.

Flow control: Since TCP-v operates only with virtual buffers, it does not need to perform flow control. Hence, pTCP reuses the window advertisement field already available in the TCP header for performing flow control. Because the overall available buffer size is an upper bound on the number of transmissions a single TCP-v can perform, the pTCP flow control mechanisms do not interfere with the TCP-v operations under such overloading of the window field. Note that all TCP-vs see the same advertisement and might individually attempt to transmit (subject to congestion window availability) enough data to fill the advertised buffer space. pTCP, being the single point of departure for all transmissions, tackles this problem by sending a FREEZE message to the concerned TCP-vs once it knows that the total number of outstanding packets is equal to the advertised window space. Due to lack of space we do not present the pTCP state diagram, packet header formats, and handshakes during connection setup and teardown. We refer interested readers to [6] for more information on these subjects.

[Figure 2. pTCP support for different Internet QoS domains: sources S0–S2 and destinations D0–D2 connected through a shared bottleneck (scenario 1, with flow weights w = 1, w = 3, and w = 2), an overlay network (scenario 2), and an assured service (scenario 3).]

PTCP AND QOS

In this section we show how pTCP, the parallel instantiation of TCP, can act as a QoS enabler in the different QoS domains identified earlier. We overload Fig. 2 with three different scenarios, where:
• Multiple flows (S0 → D0, S1 → D1, S2 → D2) with different weights share the same bottleneck in a best effort environment.
• Hosts participating in the overlay network are provided with multiple paths to their destinations (S0 to D0).
• Hosts subscribe to the assured service in addition to the best effort or relative service differentiation provided by the network (S2 to D2).
The goal is to demonstrate that pTCP can effectively:
• Allow multiple differentiation levels in a best effort environment without any support from the network infrastructure
• Provide the aggregate resources available to applications on end hosts that have access to multiple paths through overlay services
• Enable applications with subscriptions to multiple QoS service classes to enjoy the aggregate of the services offered by the different classes
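Under the abstraction introduced earlier, the three scenarios map onto pipe configurations roughly as follows; the path and service names in this sketch are hypothetical labels, not identifiers from the article.

```python
def pipe_plan(scenario):
    """Map each QoS scenario to the set of TCP-v pipes a pTCP connection
    would open, as (path, service) pairs. Illustrative names only."""
    if scenario == "weighted":        # weight 3 over one best effort path
        return [("path0", "best-effort")] * 3
    if scenario == "multipath":       # one pipe per overlay path
        return [("path0", "best-effort"), ("path1", "best-effort")]
    if scenario == "multiservice":    # one pipe per subscribed service
        return [("path0", "assured"), ("path0", "best-effort")]
    raise ValueError(scenario)
```

In every case the number of pipes equals the multiplicity of the resource being exploited, which is the single abstraction the article builds on.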

SUPPORTING MULTIPLE DIFFERENTIATION LEVELS

In a best effort network without any infrastructure support for QoS, it is still possible to provide applications with service differentiation in a relative fashion. In particular, weighted (proportional) service differentiation, which provides applications with service proportional to preassigned weights, stands out for its ability to achieve predictable and controllable differentiation. For example, if two connections with weights w1 and w2 share the same bottleneck, their respective data rates r1 and r2 will be in the ratio w1:w2 under weighted service differentiation. TCP, through its AIMD congestion control mechanism, is designed to "fairly" share the bottleneck, and hence does not support the notion of weight and multiple differentiation levels in its fairness model. Although related work for enabling weighted fairness in TCP has proposed the weighted AIMD (WAIMD) scheme that adapts TCP's congestion control parameters (α, β) based on the desired weight [2], it shows poor performance that does not scale beyond nominal weights (less than 10). Note that a weighted flow of weight w should ideally exhibit behavior equivalent to the aggregate behavior of w TCP flows. However, the aggressiveness introduced by WAIMD to emulate this behavior makes it suffer from the resultant burstiness [3]. pTCP, by way of its multistate design and effective striping, can achieve the desired aggregate behavior (and data rate) by maintaining multiple states according to the desired weight.

We consider a multilink dumbbell topology similar to that shown in Fig. 2 for providing relative service differentiation to flows sharing one bottleneck in the network.3 Figure 3 shows the result for one weighted flow and nine unit flows (default TCP flows), in addition to background traffic, sharing the bottleneck. We measure the ratio of the throughput achieved by the weighted flow to that achieved by the unit flows (on average), as the weight used increases from 1 to 100. It is clear from the figure that pTCP achieves much better performance in terms of the maximum weight that can be used to achieve the desired service differentiation (note that its performance closely tracks the "ideal" curve). WAIMD, on the other hand, does not scale beyond 10, as also shown in related work [2, 3]. Note that maintaining multiple states in accordance with the desired weight allows pTCP not only to support better scalability, but also to achieve better fairness as the weight increases. We refer interested readers to [14] for a comparison of the fairness property between pTCP and WAIMD.
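The intended behavior, a weight-w connection matching the aggregate of w unit TCP flows, can be reproduced in a toy synchronized-AIMD model. This is a sketch of the ideal curve in Fig. 3 under idealized assumptions (synchronized losses, equal RTTs), not the article's simulation setup.

```python
def weighted_ratio(capacity, n_unit, weight, rounds=2000):
    """Toy dumbbell model: n_unit unit TCP flows plus one weighted
    connection opened as `weight` parallel pipes share one bottleneck.
    Every flow adds one packet per round; all halve together whenever
    aggregate demand exceeds capacity (synchronized AIMD). Returns the
    throughput ratio of the weighted connection to an average unit flow."""
    wins = [1.0] * n_unit + [0.1] * weight   # deliberately unequal start
    tot_unit = tot_w = 0.0
    for r in range(rounds):
        wins = ([w / 2 for w in wins] if sum(wins) > capacity
                else [w + 1 for w in wins])
        if r >= rounds // 2:                 # measure after convergence
            tot_unit += sum(wins[:n_unit]) / n_unit
            tot_w += sum(wins[n_unit:])
    return tot_w / tot_unit
```

Because each pipe runs an unmodified AIMD state, the weighted connection converges to exactly `weight` times the unit share; no congestion parameter is made more aggressive, which is why this approach avoids the burstiness that limits WAIMD.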

SUPPORTING MULTIPLE NETWORK RESOURCES While the stubbornness to changes exhibited by the Internet infrastructure limits the widespread deployment of many QoS provisioning mechanisms, recent changes in the Internet have indicated the possibility of enhancing the QoS experienced by end hosts without revamping the entire Internet. Overlay networks, for example, have been proposed to deploy new services without changing the Internet substrate. A considerable amount of effort has in particular focused on addressing the suboptimality of Internet routing, proposing different approaches to achieve QoS routing and multipath routing using overlay networks [4, 5]. End hosts participating in such overlay networks thus can potentially use multiple paths for achieving better QoS that a single path cannot provide. However, as discussed earlier, TCP is unable to support the use of multiple paths due to its single-path design. pTCP, on the other hand, can achieve effective striping and hence enables the use of multiple network resources. We consider an overlay network similar to that shown in Fig. 2 for providing multiple paths between any given pair of source and destination. Figure 4 shows the performance of pTCP to achieve the aggregate bandwidth as the number of paths (pipes) increases. We introduce background UDP and TCP traffic in individual paths, and compare the aggregate bandwidth achieved in pTCP against that in application layer striping (multiple sockets). We observe that not only does application layer striping incur application complexities and protocol overheads as discussed earlier, but it also does not scale with the number of paths used in the connection. pTCP, on the other hand, can achieve better performance irrespective of the number of paths used. Therefore, it is clear that pTCP

n Figure 3. Supporting multiple differentiation levels. [Plot: throughput ratio (0–100) of the weighted flow to the unit flows versus weight (0–100), for the ideal curve, pTCP, and weighted AIMD.]

n Figure 4. Supporting multiple network resources. [Plot: application throughput (0–22 Mb/s) versus number of paths (1–10), for pTCP and multiple sockets.]

allows a connection to be split across multiple paths, and enables the underlying network to perform more flexible resource allocation to increase network utilization.
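The benefit of striping at the transport layer hinges on binding data to pipes in proportion to what each path can carry. A minimal sketch of such a scheduler is below: it assigns each fixed-size chunk to the pipe that would finish transmitting it earliest, given a per-pipe rate estimate. The function name, chunk size, and rate figures are illustrative assumptions, not pTCP's actual congestion-window-based binding:

```python
import heapq

def stripe(n_bytes, pipe_rates, chunk=1460):
    """Earliest-finish striping: each chunk goes to the pipe whose
    transmit queue drains soonest, so faster pipes carry
    proportionally more of the byte stream."""
    # heap entries: (time this pipe next becomes free, pipe id, rate)
    heap = [(0.0, pid, rate) for pid, rate in sorted(pipe_rates.items())]
    heapq.heapify(heap)
    assigned = {pid: 0 for pid in pipe_rates}
    sent = 0
    while sent < n_bytes:
        free_at, pid, rate = heapq.heappop(heap)
        assigned[pid] += chunk
        sent += chunk
        heapq.heappush(heap, (free_at + chunk / rate, pid, rate))
    return assigned

# two paths, one twice as fast: it carries roughly twice the bytes
out = stripe(10_000_000, {"path0": 2e6, "path1": 1e6})  # bytes/s estimates
```

A naive application layer alternative pushes equal shares onto each socket, so the slowest path gates the connection; scheduling by expected finish time is one way to avoid that, and pTCP performs the analogous binding inside the transport layer using per-pipe congestion state.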

SUPPORTING MULTIPLE SERVICE MODELS

The Internet in the future will provide a variety of service models, including best effort, minimum rate, assured, and premium services. This, coupled with the continuing growth of backbone bandwidth, suggests a scenario where a user who subscribes to a controlled load service for a minimum bandwidth guarantee can still benefit from using the best effort service for the residual network bandwidth. However, it has been shown in [7] that TCP, designed for cooperative sharing in best effort networks, cannot effectively deliver the combined QoS when the application uses both best effort and assured services. The main reason is that TCP "blindly" adapts its sending rate upon experiencing losses;

IEEE Communications Magazine • April 2005

3 To stay within the focus of this article, we do not delve into the details of the simulation setup used for the performance evaluation. Interested readers are referred to [14] for a more detailed presentation, including the network topology and traffic model.



n Figure 5. Supporting multiple service models (service aggregation). [Plot: throughput (0–8 Mb/s) versus time (10–100 s), for the pTCP aggregate, pipe 0 (best effort service), and pipe 1 (assured service).]

thus, when it cuts down the congestion window, not only does it reduce its bandwidth share in the best effort network, it also undesirably reduces its usage of the bandwidth provided by the assured service. This is another instantiation of the drawbacks of the single-state design of TCP, which cannot distinguish among the heterogeneous services provided by the underlying networks. As discussed earlier, pTCP, by virtue of its decoupling of functionalities, can maintain different pipes, with each pipe potentially using a different congestion control mechanism depending on the characteristics of the service used. In the following we show how this property allows pTCP to efficiently aggregate the bandwidths offered by the different services. We consider a scenario where the network provides both best effort and assured services to end hosts, similar to that shown in Fig. 2. While related work has shown that TCP is unable to effectively deliver the composite service to the

application, Fig. 5 shows that pTCP can achieve the aggregate of the services provided through the individual service models. This is possible in pTCP by opening one pipe for each subscribed service. For the best effort service, pTCP opens a pipe (pipe 0) that uses the default TCP congestion control mechanism (TCP-SACK) to adapt to the data rate fluctuations caused by the background traffic load. For the assured service, pTCP opens a pipe (pipe 1) that maintains its congestion window at the reserved bandwidth level, without any blind adaptation. Since pTCP effectively decouples the progression of the individual pipes in the aggregate connection, any fluctuation observed in the first pipe has no impact on the throughput achieved in the second pipe; hence the instantaneous throughput achieved by pTCP closely tracks the sum of the individual throughputs.

Figure 6 shows a different scenario where the host primarily uses the best effort service to achieve a desired data rate, and uses the assured service only when the achieved data rate falls short. The advantage of such "opportunistic" service composition is that the host can achieve the desired QoS while incurring the minimum cost. By opening two default TCP-v pipes and using a simple token bucket algorithm to perform flow control, pTCP dynamically freezes and defreezes the second pipe (assured service) depending on the amount of data sent through the first pipe (best effort service). As shown in Fig. 6, while the maximum data rate subscribed on the second pipe is 2 Mb/s, the actual amount of data sent mirrors the data rate fluctuations observed in the first pipe. Therefore, despite the data rate fluctuations introduced by the best effort service, using pTCP the application is able to smoothly enjoy the desired data rate (5 Mb/s), which the assured service alone cannot provide. Note that this scenario is equally applicable to multihomed hosts. For example, a mobile host equipped with both WiFi and 3G interfaces can primarily use the WiFi service, drawing on the 3G service only to mitigate the bandwidth fluctuations observed in the WiFi network, and thus benefit from multiple network interfaces in a cost-effective fashion.

n Figure 6. Supporting multiple service models (service composition). [Plot: throughput (0–6 Mb/s) versus time (10–100 s), for the pTCP aggregate, pipe 0 (data rate fluctuation), and pipe 1 (pTCP flow control).]

CONCLUSIONS

In this article we show that TCP, the transport layer protocol predominantly used in the Internet, cannot deliver the desired QoS to applications, irrespective of the degree of network support. We introduce the concept of parallel transport, and propose a parallel transport protocol based on TCP, called pTCP, for enabling Internet QoS. pTCP is a multistate extension of TCP that effectively tackles the different dimensions of multiplicity, including multiple differentiation levels, multiple network resources, and multiple service models, and hence enables varying classes of QoS support to applications. Simulation results show that pTCP can effectively deliver the desired QoS in all three QoS domains.
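As an illustrative aside on the token bucket flow control used for the opportunistic service composition of Fig. 6: the sketch below grants bytes to the assured pipe only to cover the shortfall left by the best effort pipe in each control interval. The class name, the fixed 125 ms interval, and the bucket depth are hypothetical; the 5 Mb/s target and 2 Mb/s reservation mirror the Fig. 6 scenario:

```python
class AssuredPipeGovernor:
    """Sketch of token-bucket flow control for service composition:
    the assured (paid) pipe is granted bytes only to cover the
    shortfall left by the fluctuating best-effort pipe."""

    def __init__(self, target_bps, reserved_bps, bucket_bytes):
        self.target = target_bps       # desired aggregate rate (b/s)
        self.rate = reserved_bps       # token refill = reserved rate (b/s)
        self.depth = bucket_bytes      # bucket depth (bytes)
        self.tokens = bucket_bytes     # current token level (bytes)

    def step(self, best_effort_bytes, dt):
        """One control interval of dt seconds: return the bytes to
        send on the assured pipe (0 keeps that pipe frozen)."""
        self.tokens = min(self.depth, self.tokens + self.rate * dt / 8)
        deficit = max(0.0, self.target * dt / 8 - best_effort_bytes)
        grant = min(deficit, self.tokens)
        self.tokens -= grant
        return grant

gov = AssuredPipeGovernor(target_bps=5e6, reserved_bps=2e6, bucket_bytes=250_000)
# best effort delivered only 4 Mb/s over a 125 ms interval (62,500 bytes);
# the target is 78,125 bytes, so the assured pipe covers the 15,625-byte deficit
print(gov.step(best_effort_bytes=62_500, dt=0.125))   # 15625.0
# best effort meets the target: the assured pipe stays frozen
print(gov.step(best_effort_bytes=78_125, dt=0.125))   # 0.0
```

The same governor applies unchanged to the multihomed case: substitute the WiFi interface for the best effort pipe and the 3G interface for the assured pipe.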


REFERENCES

[1] C. Fraleigh et al., "Packet-level Traffic Measurements from the Sprint IP Backbone," IEEE Network, vol. 17, no. 6, Nov./Dec. 2003, pp. 6–16.



[2] J. Crowcroft and P. Oechslin, "Differentiated End-to-End Internet Services Using a Weighted Proportional Fair Sharing TCP," ACM Comp. Commun. Rev., vol. 28, no. 3, July 1998, pp. 53–69.
[3] T. Nandagopal et al., "Scalable Service Differentiation Using Purely End-to-End Mechanisms: Features and Limitations," Proc. IWQoS, Pittsburgh, PA, June 2000.
[4] D. Andersen et al., "Resilient Overlay Networks," Proc. ACM SOSP, Banff, Canada, Oct. 2001.
[5] Z. Li and P. Mohapatra, "QRON: QoS-Aware Routing in Overlay Networks," IEEE JSAC, vol. 22, no. 1, Jan. 2004, pp. 29–40.
[6] H.-Y. Hsieh and R. Sivakumar, "A Transport Layer Approach for Achieving Aggregate Bandwidths on Multihomed Mobile Hosts," Proc. ACM MOBICOM, Atlanta, GA, Sept. 2002.
[7] W. Feng et al., "Understanding and Improving TCP Performance over Networks with Minimum Rate Guarantees," IEEE/ACM Trans. Net., vol. 7, no. 2, Apr. 1999, pp. 173–87.
[8] R. Stewart et al., "Stream Control Transmission Protocol," IETF RFC 2960, Oct. 2000.
[9] M. Zhang et al., "RR-TCP: A Reordering Robust TCP with DSACK," Proc. IEEE ICNP, Atlanta, GA, Nov. 2003.
[10] K. Sklower et al., "The PPP Multilink Protocol," IETF RFC 1990, Aug. 1996.
[11] IEEE 802.3ad Link Aggregation Task Force, IEEE Std 802.3-2002, Mar. 2002.
[12] P. Rodriguez and E. Biersack, "Dynamic Parallel-Access to Replicated Content in the Internet," IEEE/ACM Trans. Net., vol. 10, no. 4, Aug. 2002, pp. 455–64.


[13] H. Sivakumar, S. Bailey, and R. Grossman, "PSockets: The Case for Application-Level Network Striping for Data Intensive Applications Using High Speed Wide Area Networks," Proc. IEEE/ACM Supercomputing, Dallas, TX, Nov. 2000.
[14] H.-Y. Hsieh and R. Sivakumar, "pTCP: An End-to-End Transport Layer Protocol for Striped Connections," Proc. IEEE ICNP, Paris, France, Nov. 2002.

BIOGRAPHIES

HUNG-YUN HSIEH ([email protected]) received B.S. and M.S. degrees in electrical engineering from National Taiwan University, Taipei, ROC, and a Ph.D. degree in electrical and computer engineering from the Georgia Institute of Technology, Atlanta. He joined the Department of Electrical Engineering and the Graduate Institute of Communication Engineering at National Taiwan University as an assistant professor in August 2004. His research interests include wireless systems, mobile computing, and network protocols.

RAGHUPATHY SIVAKUMAR ([email protected]) received a B.E. degree in computer science from Anna University, India, in 1996, and Master's and doctoral degrees in computer science from the University of Illinois at Urbana-Champaign in 1998 and 2000, respectively. He joined the School of Electrical and Computer Engineering at the Georgia Institute of Technology as an assistant professor in August 2000. His research interests are in wireless network protocols, mobile computing, and network QoS.




