A Spike-Detecting AQM to deal with Elephants

Dinil Mon Divakaran
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576

Abstract

The current TCP/IP architecture is known to be biased against flows of small sizes — small flows (or mice) — in the network, thereby affecting the completion times of small flows. A common approach taken to solve this problem is to prioritize small flows over large flows (elephants) during the packet-scheduling phase in the router. Past studies have shown that such 'size-based' priority schedulers improve the completion times of small flows with negligible effects on the completion times of large flows. On the flip side, most approaches do not scale with increasing traffic, as they need to trace flows and estimate the ongoing sizes of active flows in the router. In this context, this work attempts to improve the performance of small flows using an active queue management (AQM) scheme, without needing to track the sizes of flows. The core idea is to exploit a property of TCP to detect large 'spikes', and hence large flows, from which packets are dropped — and, importantly, only at times of congestion. In this way, we use only a single queue, diverting from the multi-queue systems used in size-based schedulers. We propose two spike-detecting AQM policies: (i) SDS-AQM, which drops packets deterministically, and (ii) SDI-AQM, which drops packets randomly. Using a simple Markov Chain model, we compare these new policies with the well-known RED AQM, highlighting the loss behaviour. We also perform simulations and, using a number of metrics, compare the performance of (mostly) small flows obtained under the new AQMs against that obtained under the traditional drop-tail buffer, RED, as well as a size-based flow-scheduler, PS+PS. Surprisingly, RED is seen to give better performance than the size-based flow-scheduler developed specifically for improving the response times of small flows. Further, we find that the spike-detecting AQM policies give better performance to small flows than any other policy (including RED). Of the three scenarios we consider, two experiment with different buffer sizes — one with a large buffer (BDP) and another with a small buffer (a fraction of BDP). The third scenario considers the case where slow and fast flows compete. The results show that the spike-detecting AQM policies, unlike the other policies, consistently give improved performance to small flows in all three scenarios. Of the two, SDI-AQM performs better with respect to some metrics.

Keywords: AQM, QoS, Elephants, Flows, Markov

This work was done when the author was affiliated with IIT Mandi. This article is an extended version of the paper published in IEEE IPCCC 2011 [1]. In comparison to the conference paper, Section 2 discusses related works in greater detail, Section 4 is new, providing insights using a model based on a Markov Chain, and Section 6 presents more results from simulations.

Email address: [email protected] (Dinil Mon Divakaran)

Preprint submitted to Computer Networks, March 20, 2012

1. Introduction

Internet flow-size distribution exhibits strong heavy-tail behaviour: a small percentage of flows contribute a large percentage of the Internet's traffic volume [2]. This is also known as the 80-20 rule, as 20% of the flows carry 80% of the bytes. It has become customary to call the large number of small flows mice flows, and the small number of large flows elephant flows. Examples of mice flows include tweets, chats, web-search queries, HTTP requests, etc., for which users expect very short response times (we often use 'completion time' to refer to 'response time'); elephant flows are usually the downloads that run in the background (say, a kernel image or a movie, involving MBs and GBs of data), for which the expected response times are higher than those of mice flows by orders of magnitude.

The current Internet architecture has an FCFS server and a drop-tail buffer at most of its nodes. This, along with the fact that most of the flows in the Internet are carried by TCP [3], hurts the response times of mice flows adversely. Specifically, some of the important reasons for the biases are:

• As mice flows do not have much data, they almost always complete in the slow-start phase, never reaching the congestion-avoidance phase, and thus typically attain only a small throughput.

• A packet loss to a small flow most often results in a time-out due to the small congestion-window (cwnd) size; and time-outs increase the completion times of small flows many-fold. On the other hand, a large flow is most probably in the congestion-avoidance phase, and hence has congestion-windows of large sizes. Therefore, for a large flow, packet losses are usually detected using duplicate ACKs instead of time-outs, and the flow is thus able to recover faster.

• The increase in round-trip-time (RTT) due to large queueing delays hurts the small flows more than the large flows. Again, for the large flows, the large cwnd makes up for the increase in RTT.

The biases against small flows have become more relevant today — recent studies show an increase in the mice-elephant phenomenon, with a stronger shift towards a 90-10 rule [4]. Most solutions to this problem can be grouped into a class of priority-scheduling mechanisms that schedule packets based on the ongoing sizes of the flows they belong to. These priority schedulers, which we hereafter refer to as size-based schedulers, give priority to 'potential' small flows over large flows, thereby improving the response times of small flows. They range from SRPT [5] to LAS [6] to MLPS scheduling policies [7]. The different size-based schedulers need to identify flows and distinguish between small and large flows. Most of these mechanisms have multiple queues with different priorities, and use the information of ongoing flow-sizes to decide where to queue an incoming packet. Other works give priority to packets of small flows in space, that is, in the buffer. We observe that most such works giving preferential treatment (in space and/or in time) based on size assume that the router keeps track of the sizes of all flows. This assumption is challenged by the scalability factor, since tracking flow-size information requires a flow-table update for every arriving packet. Given that this action involves lookup, memory access and update, it requires fast access as well as high power. Besides, as the number of flows in progress can grow to a large value under high load, this can also become an overhead. Hence, most existing solutions face a roadblock when it comes to implementation.

The spike-detecting AQM proposed here is inspired by the TLPS/SD (two-level-processor-sharing with spike-detection) system proposed in [8]. In TLPS/SD, a large flow is served in the high-priority queue until it is detected as 'large', which happens when its congestion-window is large enough (> 2^η) to 'cause' congestion (buffer length > β) in the link, for pre-determined values of η and β. Such detected large flows are de-prioritized by serving them in a low-priority queue thereafter.

In this paper, we present spike-detecting AQMs (SD-AQMs in short). The major difference between SD-AQMs and the existing works that improve the response times of small flows (in comparison to the drop-tail buffer with FCFS scheduler) is that SD-AQMs do not need to identify small and large flows. Second, these new AQMs do not need to track the sizes of flows. Third, they use a single queue, removing the need for two or more virtual queues. Based on the simple core idea of detecting spikes, we present two policies: (i) SDS-AQM, which drops packets deterministically, and (ii) SDI-AQM, which drops packets randomly. We use 'SD-AQM policies' to refer to both these policies together. As far as we know, there is also no existing work that studies the performance of small flows under the well-known RED (random early detection) AQM policy [9]. Therefore, in this work, we also analyze the performance of the RED AQM. Using a simple Markov Chain model, we compare the performance of RED and the SD-AQM policies, and highlight why small flows gain under the SD-AQM policies in comparison to RED. We then use simulations to study the performance of small flows under the SD-AQM policies, RED, the traditional drop-tail (with FCFS scheduler) and the PS+PS scheduler — a size-based scheduling strategy developed specifically to improve the response times of small flows. We perform studies using various metrics under three scenarios: (i) with the bottleneck buffer size equal to the BDP, (ii) with a small bottleneck buffer (of size 1000 packets), and (iii) where slow and fast flows compete. In general, our observations reveal that small flows perform worst under drop-tail. In comparison to drop-tail, the effects on large flows are negligible under the other policies. The results in the large-buffer scenario show that while small flows under RED get similar performance as under PS+PS, medium- and large-size flows are less penalized in RED; whereas in the small-buffer scenario, RED performs better than PS+PS even for small flows. In both scenarios, the SD-AQM policies are observed to perform better than all other policies. Not only small flows, but also medium- and large-size flows get better performance under the SD-AQM policies. Between the two SD-AQM policies, SDI-AQM is a better choice, as it induces fewer timeouts on the overall traffic. The performance of the SDI policy becomes more evident in the third scenario, where the flows taking the path with larger RTT face a much smaller number of timeouts and congestion-window cuts in comparison to other policies.

The remainder of this paper is organized as follows. The next section discusses previous works on mitigating the biases against small flows. In Section 3, the two spike-detecting AQM policies are proposed and developed. We present a Markov model in Section 4 for comparing the SD-AQM policies with RED. In Section 5, we give the goals, settings, and scenarios of the simulations. We then evaluate the performance of our proposed policies as well as RED in Section 6, before concluding in Section 7.

2. Related works: Mitigating the bias

A general approach to solving the problems due to these biases, and thereby improving the performance — most often, the completion time (among other metrics) — of small flows, is to prioritize small flows. Priority can be given in either or both of the two known dimensions: space and time. While scheduling algorithms give priority in time, buffer management policies (and even routing, which we do not discuss here) give priority in space.

2.1. Prioritization in space

Active queue management (AQM) schemes are used to send congestion signals from intermediate routers to end-hosts. One such AQM policy is random early detection, or RED [9]. Introduced as a mechanism to desynchronize concurrent TCP flows for better link utilization, RED uses an (exponentially-averaged) queue-length to mark or drop packets (once the queue-length crosses a minimum threshold, min_th). Simply put, the marking probability increases with the average queue-length. Though, to our knowledge, there exists no study showing how small flows would perform under RED, RED-based approaches have been proposed to control large bandwidth-consuming flows. One idea is to first detect the bandwidth-consuming flows, and then assign higher drop rates to such flows in the RED queue [10, 11], thereby dropping fewer packets from other flows. Another work used RIO (RED with In and Out) at the bottleneck queue to drop packets of large flows at a much higher rate than packets of small flows [12]. To facilitate this, an architecture was proposed where the edge routers mark packets as belonging to a small or a large flow, using a threshold-based classification.

2.2. Prioritization in time: Scheduling

Priority-based scheduling gives priority to packets of one type over packets of other types; i.e., packets of higher priority always have precedence over packets of lower priority. Size-based priority scheduling strategies can be classified into two, based on whether the strategies have knowledge of the flow size in advance or not: anticipating and non-anticipating strategies.

2.2.1. Anticipating strategies

Anticipating strategies assume knowledge of the job size (a job can be a process in an operating system, a file stored on a machine, or a flow in the Internet) on its arrival to the system. One such policy is the shortest-remaining-processing-time (SRPT) policy, which always serves the job in the system that needs the shortest remaining service time.

SRPT is known to be optimal among all policies with respect to the mean response time [5]. The improvement in response time brought by SRPT over the processor-sharing discipline (or PS — an approximation of bandwidth-sharing in the Internet at the flow-level under some assumptions [13]) becomes striking with the variability of the service-time distribution. Therefore, it finds use in Internet scenarios, where the file-size distribution is known to have high variability. SRPT scheduling has been used effectively in reducing the response time in web servers [14, 15]. The disadvantage of the policy comes from the need to anticipate the job size. While this information is available in web servers, schedulers in routers do not have knowledge of the size of a newly arriving flow. Therefore, policies that are blind, i.e., that do not require the knowledge of flow size in advance, are suitable for scheduling in the Internet.

2.2.2. Non-anticipating strategies

Non-anticipating policies instead use the ongoing size, or age, of a flow for taking scheduling decisions. The ongoing size of a flow is the size it has attained until the current scheduling instance. This gives an indication of the remaining size of the flow. The use of the age of flows is particularly interesting in scenarios where the flows that have been served for a long time are likely much larger, and thus have larger remaining size. This brings in the notion of hazard rate. If F denotes the cumulative distribution of flow sizes, then the hazard rate is h(x) = F′(x) / (1 − F(x)). A distribution has decreasing hazard rate (DHR) if h(x) is non-increasing for all x ≥ 0. If the flow-size distribution comes from the class of DHR distributions, then, intuitively, flows with larger ongoing size have smaller hazard rate, and are thus less likely to complete. As many heavy-tailed distributions, like the Pareto distribution, fall under the DHR class, non-anticipating strategies have been a focus of interesting research in the area of flow-scheduling. In the following, we brief some important non-anticipating scheduling strategies.
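As a quick illustration of the DHR notion (an example added here for concreteness, not from the original text), consider the unbounded Pareto distribution with scale x_m and shape a > 0:

    F(x) = 1 − (x_m / x)^a  for x ≥ x_m,   so   F′(x) = a x_m^a x^{−(a+1)}   and   1 − F(x) = (x_m / x)^a,

    hence  h(x) = F′(x) / (1 − F(x)) = a / x,

which is decreasing in x: the longer a Pareto-distributed flow has already lasted, the less likely it is to complete soon, which is exactly the property non-anticipating schedulers exploit.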

FB or LAS scheduling: The FB (foreground-background) policy gives priority to the flow that has the minimum ongoing size (meaning, the flow that has sent the least) among all the competing flows, and serves it before serving any other flow. If there are multiple such flows (with minimum ongoing size), all of them share the bandwidth equally, as in a PS policy [7]. The FB policy is shown to be optimal with respect to the mean response time when the distributions have decreasing hazard rate [16]. Further details can be found in an extensive survey by Nuyens and Wierman [17]. This scheduling policy has been studied for flows at a bottleneck queue [6, 18], where the policy is called LAS — least-attained-service. Here again, the flow to be served next is the one with the least attained service so far. The study shows that the policy not only decreases the delay and the loss rate of small flows compared to an FCFS scheduler with a drop-tail buffer, but also causes only a negligible increase in delay for large flows. In a TCP/IP network, the implementation of LAS requires knowledge of the running packet count of each flow, so as to find the youngest ongoing flow. This, along with other drawbacks such as unfairness and the scalability issue, has motivated researchers to explore other means of giving priority to small flows, one such being the strict PS+PS model proposed in [19]. Besides, for distributions that are DHR only in the tail (like the Pareto distribution bounded away from zero), FB may not be optimal [20].

PS+PS scheduling: The PS+PS scheduling [19], as the name indicates, uses two processor-sharing queues, with strict priority between them. The first θ packets of every flow are served in the high-priority queue, say Q1, and the remaining packets (if any) are served in the low-priority queue, say Q2. Hence all flows of size less than or equal to θ get served in the high-priority queue. Observe that, for all flows with size x > θ, the first θ packets are also served in Q1. The mean of the flow-size distribution at Q1 thus turns out to be the mean of the truncated distribution,

    F_t(x) = F(x)   if x ≤ θ,
             1      otherwise.

This reduces the load in Q1; and since Q1 is an M/G/1−PS queue with this new distribution, the conditional mean response time, T(x) = x / (1 − ρ), also reduces. The load in Q1, and thereby the conditional mean response time of flows completing in Q1, depends on the value of the threshold θ. It is proved that the PS+PS model reduces the overall mean response time (E[T]) in comparison to PS, for the DHR class of distributions. The authors (in [19]) also take a step forward in the performance analysis of size-based scheduling systems, by analyzing another metric — maximum response time — other than the usual conditional mean response time. In addition, the authors proposed an implementation of this model; but it relies on TCP sequence numbers, requiring them to start from a set of possible initial numbers. This not only makes the scheme TCP-dependent, but also reduces the randomness of initial sequence numbers that TCP flows can have.
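To make the mechanism concrete, the following is a small sketch of ours (not the implementation of [19]) of the threshold-based dispatch used by PS+PS; the per-flow packet counter it keeps is exactly the per-flow state such schedulers must maintain:

```python
from collections import defaultdict

class PsPsDispatcher:
    """Route the first theta packets of each flow to the high-priority
    queue Q1 and all subsequent packets to the low-priority queue Q2."""

    def __init__(self, theta):
        self.theta = theta
        self.pkt_count = defaultdict(int)   # packets seen so far, per flow

    def classify(self, flow_id):
        self.pkt_count[flow_id] += 1
        return "Q1" if self.pkt_count[flow_id] <= self.theta else "Q2"
```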

MLPS discipline: The PS+PS model can be seen as a specific case of the multi-level-processor-sharing (MLPS) discipline [21]. In the context of prioritizing small flows, [22] demonstrates that the mean delay of a two-level MLPS can be close to that of FB in the case of Pareto and hyper-exponential distributions, belonging to the DHR class. An ideal implementation of an MLPS discipline would require the size information of the flows in the system.

Sampling and scheduling: An intuitive way to track large flows is to use real-time sampling to detect large flows (thus classifying them), and to use this information for performing size-based scheduling. Since the requirement here is only to differentiate between small and large flows, the sampling strategy need not necessarily track the exact flow-sizes. A simple way to achieve this is to probabilistically sample arriving packets, and store the information of sampled flows along with the sampled packets of each flow [23]. SIFT, proposed in [24], uses such a sampling scheme along with the PS+PS scheduler. A flow is 'small' as long as it is not sampled. All such undetected flows go to the higher-priority queue until they are sampled. The authors analyzed the system using 'average delay' (the average of the delays of all small flows, and of all large flows) for varying loads, as a performance metric. Though an important metric, it does not reveal the worst-case behaviour. This is more important here as the sampling strategy can induce false positives, i.e., small flows, if sampled, will be sent to the lower-priority queue. Deviating from this simple strategy, [25] proposed to use a threshold-based sampling, derived from the well-known 'Sample and Hold' strategy [26], along with PS+PS scheduling. In this policy, the size of a sampled flow is tracked only until it crosses a threshold. This threshold can be the same as the one used in PS+PS scheduling, to ensure that there are no false positives, but only false negatives. A similar threshold-based scheme is proposed and analyzed in [27].

TLPS/SD: Another method for prioritizing small flows uses two-level-processor-sharing scheduling along with spike-detection, where packets are assumed to arrive as 'spikes' [8]. With TCP (as explained below), large spikes belong to large flows. Hence, to detect large flows, it is only required to detect large spikes, and that too, only at times of congestion. Detected large flows are sent to the low-priority queue Q2 until completion, while other flows continue to be served in the high-priority queue Q1.

3. Spike-detecting AQM

The spike-detecting AQM we propose functions on the buffer of an outgoing link at a router. We refer to cwnd, the congestion-window of a TCP flow, as a 'spike'. A TCP flow in the slow-start phase with a spike-size of 2^η has at least Σ_{i=0}^{η} 2^i = 2^{η+1} − 1 packets (assuming an initial window of size one packet). If the flow was in the congestion-avoidance phase, the size will be larger than 2^{η+1} − 1 packets, depending on when the flow switched from the slow-start phase to the congestion-avoidance phase.

Given the above property of a TCP flow, the basic idea is to detect spikes of large sizes during times of congestion, and drop an arriving packet if it belongs to a 'large' spike. We consider a spike as large if its size is greater than 2^η packets. In this context, we define a large flow as a flow that has a spike greater than 2^η packets. That is, any flow of size greater than or equal to 2^{η+1} packets is a large flow. This definition of elephant flows is similar to that found in the literature; i.e., flows with sizes beyond a pre-defined threshold are elephant flows. Since large spikes belong to large flows, this strategy of dropping an arriving packet belonging to a large spike (at times of congestion) will drop packets from large flows.

The next question is how to quantify the congestion of a link. For this, as in [8], we observe the length of the buffer. Whenever the observed buffer-length exceeds β packets, we assume the link is congested, and the spike-detection mechanism is triggered. The spike-detecting mechanism classifies packets in the buffer as belonging to different spikes, and then finds the size of all spikes. If an arriving packet belonging to a large spike finds the buffer-length greater than β, it is dropped, or else it is queued. We assume that β < M, where M is the size of the buffer. It is worth noting that such an AQM will not hurt flows with constant bit-rate, and flows that are too slow to enqueue considerable numbers of packets in the queue.

Assumptions: We assume a TCP sender sends an entire cwnd in one go (thus forming a spike at a buffer). Since each spike is essentially a cwnd of a TCP flow, it can be identified using the common five-tuple (source and destination IP addresses, source and destination ports, and protocol) used to identify flows. Observe that, as a TCP flow sends only one cwnd of packets in one round (RTT), no two spikes present at the same time can belong to a single TCP flow. On the parameters η and β, the values are such that 0 < 2^η < β.

Algorithm 1 lists the function for enqueueing an incoming packet at the AQM buffer. The dequeueing operation removes the packet from the head, as in a FIFO queue, and hence is not listed. The variable Q denotes the physical buffer, and P the incoming packet at the router. spike(P) gives the spike to which the packet P belongs. The function enque-fifo enqueues a packet at the end of the FIFO buffer only if there is space for the packet, or else the packet is dropped. We refer to this AQM policy as SDS-AQM, or SDS in short.

Algorithm 1 Function: Enqueue(Packet P)
 1: if size(P) + size(Q) > β then
 2:     s ← spike(P)
 3:     find size of spike s
 4:     if size(s) > 2^η then
 5:         drop(P)
 6:         return
 7:     end if
 8: end if
 9: enque-fifo(P)
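For illustration, the following is a minimal Python sketch of ours of the enqueue rule of Algorithm 1 (not the NS-2 implementation used later); a packet is assumed to carry a flow identifier (the five-tuple), which also identifies its spike, and all sizes are counted in packets:

```python
from collections import deque

class SdsQueue:
    """Sketch of Algorithm 1: drop packets of large spikes when congested."""

    def __init__(self, capacity, beta, eta):
        self.buf = deque()        # FIFO buffer of packets
        self.capacity = capacity  # physical buffer size M (packets)
        self.beta = beta          # congestion threshold (packets)
        self.eta = eta            # a spike is 'large' if its size > 2^eta

    def spike_size(self, flow_id):
        # O(n) scan: count buffered packets belonging to the same spike
        return sum(1 for p in self.buf if p.flow_id == flow_id)

    def enqueue(self, pkt):
        if len(self.buf) + 1 > self.beta:                 # link congested
            if self.spike_size(pkt.flow_id) > 2 ** self.eta:
                return False                              # drop: large spike
        if len(self.buf) < self.capacity:
            self.buf.append(pkt)                          # enque-fifo
            return True
        return False                                      # tail-drop when full
```

Here `pkt.flow_id` stands for the five-tuple; in a real router the per-spike count would be maintained incrementally rather than recomputed by a scan.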

Observe that, in Algorithm 1, whenever an arriving packet belonging to a large spike finds the buffer-length greater than β, it is dropped. As we assume packets arrive in spikes, it might happen that a burst of packets belonging to a spike (or even an entire spike) gets dropped. These burst losses not only make the network inefficient (as the packets in the dropped burst have to be resent) but may also lead to timeouts (at the TCP sender); whereas our requirement is only to slow down a large flow temporarily, by informing the TCP sender. This slowing-down can be achieved by dropping just one packet from a flow, as the sending TCP cuts down its cwnd by half as soon as it receives three duplicate ACKs from the receiver (thereby conveying that it has not received a packet in between). To reduce the number of dropped packets of a burst, and thereby the number of packet-losses, a simple strategy is to drop packets probabilistically. We do the following: for each spike that is large, we compute the drop probability for a packet of the spike s ∈ S of size x as,

    p(x) = min( x / (2^{η+φ} + 1), 1.0 ),                                               (1)

where φ ≥ 0. The parameter φ decides the size of the largest spike that may be enqueued. If φ = 0, then every arriving packet belonging to a large spike will be dropped (if the link is congested). Also note that every packet belonging to a given spike will be dropped with the same probability. We call this improved AQM policy SDI-AQM, or SDI in short. With the above computation of probabilities in place, when an arriving packet belonging to a large spike, say s′, finds the link congested, the following is done instead of line 5 in Algorithm 1:

• A coin is tossed with probability p(s′) for heads.
• The packet is dropped if the coin comes up heads.

Observe that the probability is computed only for a large spike; hence the minimum size of a large spike (used in Eq. 1) is 2^η + 1 packets.
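A corresponding sketch of ours of the SDI drop decision, which replaces the deterministic drop of line 5 with a coin toss whose bias follows Eq. (1):

```python
import random

def sdi_drop_probability(spike_size, eta, phi):
    """Eq. (1): p(x) = min(x / (2^(eta + phi) + 1), 1.0)."""
    return min(spike_size / (2 ** (eta + phi) + 1), 1.0)

def sdi_drops_packet(spike_size, eta, phi):
    """Drop a packet of a large spike (size > 2^eta) with probability p(x)."""
    if spike_size <= 2 ** eta:
        return False                  # small spikes are never dropped here
    return random.random() < sdi_drop_probability(spike_size, eta, phi)
```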


Cost: The cost of the spike-size computation in line 3 is O(n) for queue-length n, as it computes the size of a spike by counting the number of packets in the buffer that belong to the spike. Note that this is done only when an arriving packet finds n greater than β. Though it requires processing of buffered packets, the cost can be brought down by enqueueing the arrived packet and deferring the decision on it until the size is computed (which, with parallel processing, can happen before the packet reaches the head of the buffer). This would require dropping packets that are already buffered, which can be achieved if the buffer is implemented as a linked list. Though previous studies have shown that such in-buffer-drop strategies reduce queue oscillations [28], a study with respect to our proposed schemes is left for future work.

4. Analysis using model

In this section, we model the three AQMs, SDS, SDI and RED, using an M^X/M/1 finite queue. We approximate spikes as batches (of packets) arriving at a buffer. The general Markov model for SDS, SDI and RED AQMs with batch arrivals is shown in Fig. 1. λ is the arrival rate of batches, and µ is the service rate of packets. The batch size X is assumed to be geometrically distributed; the probability of a batch-size n is,

    b_n = Pr(X = n) = r (1 − r)^{n−1},                                                  (2)

where 0 < r < 1. The probability to transit from state i to state j, q_{i,j}, is dependent on:

• the current state i,
• the size, j − i, of the arriving batch, such that j > i,
• and the drop-probability function.

Since the drop-probability function is different for SDS, SDI and RED, q_{i,j} (and hence the Markov Chain) is different for all three. We denote them specifically as q^S_{i,j}, q^I_{i,j} and q^R_{i,j}, for SDS, SDI and RED, respectively. The transition probabilities for SDS-AQM, SDI-AQM and RED are defined in Section 4.1, Section 4.2 and Section 4.3, respectively.


Figure 1: A Markov model for SDS, SDI and RED AQMs (states 0, 1, 2, ..., β, β+1, ..., M; a batch arrival moves the chain from state i up to state j at rate λq_{i,j}, and a packet departure moves it from state i down to state i−1 at rate µ)

Let π = {π_0, π_1, π_2, ..., π_M} denote the stationary distribution. The π_i's can be obtained by solving the balance equations:

    π_0 λ Σ_{j=1}^{M} q_{0,j} = π_1 µ,

    π_k ( λ Σ_{j=1}^{M−k} q_{k,j} + µ ) = π_{k+1} µ + Σ_{i=0}^{k−1} π_i λ q_{i,k−i},    1 ≤ k < M,

    π_M µ = Σ_{i=0}^{M−1} π_i λ q_{i,M−i}.                                              (3)
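The balance equations of this finite chain are straightforward to solve numerically. The sketch below is ours (it assumes numpy is available): it builds the generator of the chain in Fig. 1 for a given transition-probability function q(i, j) and solves πG = 0 with Σ_i π_i = 1; the second function gives q for the SDS drop rule of the next subsection, combining Eqs. (2), (4) and (5):

```python
import numpy as np

def stationary_distribution(M, lam, mu, q):
    """Stationary distribution of the chain of Fig. 1.

    q(i, j) must return the probability that a batch arriving in state i
    is accepted and moves the chain to state j (batch size j - i)."""
    G = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        for j in range(i + 1, M + 1):
            G[i, j] = lam * q(i, j)      # accepted batch arrival of size j - i
        if i > 0:
            G[i, i - 1] = mu             # departure of a single packet
        G[i, i] = -G[i].sum()            # generator diagonal
    # Replace one balance equation with the normalisation constraint.
    A = np.vstack([G.T[:-1], np.ones(M + 1)])
    b = np.zeros(M + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def q_sds(i, j, M, beta, eta, r):
    """q^S_{i,j} = b_{j-i} (1 - delta^S_{i,j-i}) with geometric batch sizes."""
    x = j - i
    b_x = r * (1 - r) ** (x - 1)                    # Eq. (2)
    fits = x <= M - i
    accepted = (i <= beta and fits) or (i > beta and x <= 2 ** eta and fits)
    return b_x if accepted else 0.0
```

For instance, with M = 100, β = 50, η = 4, r = 1/20 (mean batch size 20 packets) and λ, µ chosen so that the load λ·E[X]/µ equals 0.95, passing q_sds to stationary_distribution yields the π^S used in the batch-loss expressions below.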

We proceed to find the batch-loss probability for each AQM policy.

4.1. SDS-AQM

The drop probability in SDS-AQM is a function of both the instantaneous queue-length, k, and the size of the arriving batch, x:

    δ^S_{k,x} = 0   if ((k ≤ β) ∧ (x ≤ M − k)) ∨ ((k > β) ∧ (x ≤ min(2^η, M − k))),
                1   otherwise.                                                          (4)

The transition probability from state i to state j for SDS-AQM becomes,

    q^S_{i,j} = b_{j−i} (1 − δ^S_{i,j−i}),    0 ≤ i < M,  0 < j ≤ M,  j > i.            (5)

The steady-state probabilities can be obtained by solving the set of balance equations given before (refer to Eq. 3), using the q_{i,j}'s as defined in Eq. 5.

For SDS-AQM, we denote the steady-state probabilities as π^S = {π^S_0, π^S_1, ..., π^S_M}. By the PASTA property, the probability that an arriving batch of size x is dropped is,

    P^S_b(x) = Σ_{i=M−x+1}^{M} π^S_i                                        if x ≤ 2^η,
               Σ_{i=0}^{β} π^S_i I_{i>(M−x)} + Σ_{i=β+1}^{M} π^S_i          otherwise,  (6)

where I_z is an indicator function returning one if the expression z is true, and zero otherwise. Next, we want to compute the probability that a small flow will be blocked; in other words, the probability that a small flow will face a packet-loss during its lifetime. Since most small flows complete their transfers during the slow-start phase, what we essentially require is the probability that a TCP flow of size y in slow-start will face a loss,

    P^S_f(y) = 1 − (1 − P^S_b(y − γ)) ∏_{i=0}^{α−1} (1 − P^S_b(2^i)),                   (7)

where α = ⌊log_2(y + 1)⌋, and γ = 2^α − 1.
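Given any of the batch-loss expressions, the flow-blocking probability of Eq. (7) (and likewise Eqs. (11) and (16) below) simply multiplies the per-window survival probabilities of a flow in slow-start. A small sketch of ours, assuming a function batch_loss(x) that returns P_b(x) and satisfies batch_loss(0) = 0:

```python
import math

def flow_blocking_probability(y, batch_loss):
    """Eq. (7): 1 - (1 - P_b(y - gamma)) * prod_{i=0}^{alpha-1} (1 - P_b(2^i)),
    with alpha = floor(log2(y + 1)) and gamma = 2^alpha - 1.

    The flow sends windows of sizes 1, 2, ..., 2^(alpha-1) (gamma packets in
    total) and a final window of y - gamma packets; it is counted as blocked
    if any of these windows loses a packet."""
    alpha = math.floor(math.log2(y + 1))
    gamma = 2 ** alpha - 1
    survive = 1.0 - batch_loss(y - gamma)        # last (possibly partial) window
    for i in range(alpha):
        survive *= 1.0 - batch_loss(2 ** i)      # full slow-start windows
    return 1.0 - survive
```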

4.2. SDI-AQM

The drop-probability function in SDI-AQM becomes,

    δ^I_{k,x} = 0      if ((k ≤ β) ∧ (x ≤ M − k)) ∨ ((k > β) ∧ (x ≤ min(2^η, M − k))),
                p(x)   otherwise,                                                       (8)

where p(x) is as defined in Eq. 1. The transition probability is similar to Eq. 5,

    q^I_{i,j} = b_{j−i} (1 − δ^I_{i,j−i}),    0 ≤ i < M,  0 < j ≤ M,  j > i.            (9)

Let the steady-state probabilities for SDI-AQM be denoted as π^I = {π^I_0, π^I_1, ..., π^I_M}. With this, the probability that an arriving batch of size x is dropped is,

    P^I_b(x) = Σ_{i=M−x+1}^{M} π^I_i                                            if x ≤ 2^η,
               Σ_{i=0}^{β} π^I_i I_{i>(M−x)} + Σ_{i=β+1}^{M} π^I_i δ^I_{i,x}    otherwise.  (10)

The probability that a TCP flow of size y in slow-start will face a loss is,

    P^I_f(y) = 1 − (1 − P^I_b(y − γ)) ∏_{i=0}^{α−1} (1 − P^I_b(2^i)),                   (11)

where α = ⌊log_2(y + 1)⌋, and γ = 2^α − 1.

4.3. RED

For RED, we take min_th = β, and max_th = M. The drop probability, δ(k̂), is a function of the average queue size k̂,

    k̂ = (1 − w) k̂ + w k,                                                               (12)

where w is a weight parameter, and k the instantaneous queue-length.

Assumption: To keep the model simple, we assume w = 1. Though this does not model RED accurately, observe that, as the drop probability for all packets of an arriving batch is the same (as in [29]), the effect is weaker than having the drop function vary for each arriving packet. The drop-probability function is,

    δ^R_{k̂} = 0                    if k̂ ≤ β,
              (k̂ − β) / (M − β)    if β < k̂ ≤ M.                                       (13)

Then,

    q^R_{i,j} = b_{j−i} (1 − δ^R_i),    0 ≤ i < M,  0 < j ≤ M,  j > i.                  (14)

Let the steady-state probabilities be denoted as π^R. The probability that an arriving batch of size x is dropped is,

    P^R_b(x) = Σ_{i=β}^{M−x} π^R_i ( I_{i<(M−x)} δ^R_i + I_{i≥(M−x)} ).                 (15)

The flow-blocking probability is similar to that for the SD-AQM policies;

    P^R_f(y) = 1 − (1 − P^R_b(y − γ)) ∏_{i=0}^{α−1} (1 − P^R_b(2^i)),                   (16)

where α = ⌊log_2(y + 1)⌋, and γ = 2^α − 1.
where α = blog2 (y + 1)c, and γ = 2α − 1. 4.4. Numerical analysis For numerical analysis we set maximum queue-length M to 100 packets. η = 4, β = 50 and φ = 4. Mean batch-size is 20 packets. The arrival and service rates, λ and µ, are set to values such that the load is equal to 0.95. Fig. 2 plots the loss probability against batch-size, for the policies. In Fig. 3, the probability that a flow is blocked is plotted against flow-size. As expected, RED does not show bias based on the batch-size, as the dropprobability function (we used) was independent of the size of arriving batch. 14

0.3

RED SDS SDI

RED 1 SDS SDI

0.2

Loss probability

Loss probability

0.25

0.15 0.1

0.1 0.01 0.001 0.0001

0.05 0

1e-05 5

10

15

20

25

30

0

Batch size (in packets)

5

10

15

20

25

30

35

Flow size (in packets)

Figure 2: Batch-loss probability

Figure 3: Flow-blocking probability

At the same time, observe that this drop function provides a good lower bound if the batch size is smaller in comparison to the buffer length. Both SDS and SDI AQM policies give lower loss probabilities to batches of small sizes (depending on η). For batch-sizes greater than 2η , SDS gives higher loss probabilities as it does not consider the size of arriving batch; whereas SDI takes the batch-size into consideration, and hence gives increasing loss probabilities with increasing size. The flow-blocking probability plot shows negligible but still higher blocking probability for flow-sizes using the SDI policy in comparison to SDS policy. The reason for this can be deducted from the plot of batch-loss probability, which shows slightly higher drop probabilities for batch-sizes less than 2η under SDI than under SDS. 5. Simulations: Goals and Settings 5.1. Goals The goal of the simulations is to evaluate the performance of the spikedetecting AQM policies, both SDS and SDI, and compare them with: 1. RED: As far as we know, there has not been a comparative study of RED using some important metrics (given below) on improving the response times of small flows, which we do her. 2. DT: A router today usually has a FCFS scheduler serving packets arriving at the drop-tail buffer, denoted as ‘DT’ here. 3. PS+PS: This policy uses a threshold θ [19], to differentially serve large and small flows (as discussed in section 2). We consider the following different metrics for our study: 15

1. 2. 3. 4.

Conditional mean completion time of small flows; Conditional mean completion time of large flows; Number of time-outs encountered by small and all flows; Number of times the congestion-windows are reduced (congestioncuts) for small flows and all flows; 5. Mean completion time for range of flow sizes; 6. Mean completion time for small flows, large flows and all flows; 7. Maximum completion time of small flows. 5.2. Settings Simulations were performed in NS-2 on a dumbbell topology as seen in Fig. 4. The bottleneck link capacity was set to 1 Gbps. Flow-sizes were taken from a mix of Exponential and Pareto distributions. More precisely, 85% of flows were generated using an Exponential distribution with a mean 20 KB; the remaining 15% are contributed by large flows using Pareto distribution with shape set to 1.1, and mean flow size set to 1 MB. During each run 20, 000 flows were generated following a Poisson process, all carried by TCP SACK version. Packet size was kept constant and equal to 1000 bytes. For post-simulation analysis, we define ‘small flow’ as a flow with size less than or equal to 20 KB, and the remaining as ‘large flows’. Here the flow-size is the size of data generated by the application, not including any header or TCP/IP information. Also note that, a small flow of 20 KB can take more than 25 packets to transfer the data, as it includes control packets (like SYN, FIN etc.) and retransmitted packets. 5.3. Parameters Spike-detecting AQM policies: For both the SDS and SDI policies, we set η to 4 and β to 200. This means that, only a flow of size greater than or equal 25 − 1 = 31 packets can face packet drops, and this can happen only when the buffer-length exceeds 200 packets (under the assumption that the queue rarely gets full to experience a tail-drop). For SDI policy, the value of φ is set to 4. The values of η and β are motivated from [8]. RED: We use the Gentle version, as it is known to be more robust to the settings of various parameters of RED3 . The value for min th is set to 200 packets, the value of β. PS+PS: The threshold θ used to differentiate between small and large flows in this policy is set to 31 packets. 3

Recommendation: http://www.icir.org/floyd/red/gentle.html

16

5.4. Scenarios We consider three scenarios: • Scenario 1: The link capacities of the source and destination nodes were all set to 100 Mbps. The delays on the links were set such that the base RTT (propagation delays) on any src-dst end-to-end path is equal to 100 ms. The size of the bottleneck queue was set to the bandwidthdelay product (BDP) for 100 ms base RTT. That is, M = BDP = 12500 packets. There were 100 node pairs. The flow arrival rate is adapted to have a packet loss-rate of around 1.2% in the DT scenario (with traditional BDP buffer size, defined as in Scenario 1 below). Note that, using the ratio of sum of source capacities to bottleneck link capacity as load is not meaningful in a closed system. • Scenario 2: Motivated by the need to experiment with small buffers in routers [30], here we set the size of the bottleneck queue to 1000 packets, i.e., less than one-tenth of the BDP used in Scenario 1. M = 1000 packets. All the other settings were same as in Scenario 1. The packet loss-rate was observed as ≈ 2.0. • Scenario 3: To study the impact on slow flows, we experiment with 10 node pairs on dumbbell topology, where the base RTT of the path connecting first node pair is 200 ms and that of the other nine (srcdst) paths connecting the remaining node pairs is 100 ms. 20,000 flows were generated in this scenario too, with 2000 flows at every source node. M = BDP = 12500 packets. The packet loss-rate was ≈ 1.6. 6. Performance Evaluation We analyze the performance of the two spike-detecting AQM policies as well as RED AQM policy and compare them with drop-tail and PS+PS. 6.1. Scenario 1: M = BDP Here the bottleneck queue-size M is set to 12500 packets. Fig. 5 gives the conditional mean completion times of flows with sizes not greater than 200 packets. All policies are seen to give lesser mean completion times for small and medium size flows in comparison to DT. Observe that for small flows (size ≤ 20 KB), PS+PS and the three AQM policies give almost the same mean completion times. Once the threshold θ is crossed, PS+PS approaches DT quickly. The AQM policies, RED, SDS and SDI, are giving much lesser response time for flows with sizes greater than the threshold, with SDS 17

Router src 1

Mean completion time (in seconds)

2.5

Router

C1

src 2 C2

C1

dst1

C2

dst2

C Cn−1

Cn−1

Bottleneck

src n-1

Cn

2

DT PS+PS RED SDS SDI

1.5

1

0.5

0

dstn-1

0

Cn

50

100

150

200

Flow sizes (in packets of 1000 B)

src n

dstn

Figure 5: Scenario 1 - Conditional mean completion time for flow-sizes ≤ 200

Figure 4: Topology

Mean completion time (in seconds)

50

DT PS+PS RED 40 SDS SDI 30

20

10

0 100

1000

10000

Flow sizes (in packets of 1000 B)

(a) Mean completion time for large flows

(b) Mean for ranges of flow sizes

Figure 6: Scenario 1

and SDI algorithms giving better performance (than RED). The improved performance of RED over PS+PS is due to the fact that in RED flows are punished only at times of congestion (queue-length > β), whereas in PS+PS all flows with sizes (even slightly) greater than θ are sent to the low-priority queue and hence served only when the high-priority queue is empty. Fig. 6(a) plots the conditional mean completion times of large flows, wherein DT is showing lower mean completion times for flows with sizes greater than (approximately) 1000 packets (1 MB). In Fig. 6(b) we see the mean values (of completion times) plotted for different range of flow sizes. The gains for small and medium size flows under policies other than DT, and in particular under the AQM policies are evident. On an average, PS+PS induces more delay on medium size flows, while AQM policies are seen to give the best performance. Flows face less number of time-outs under the AQM policies than under DT and PS+PS (as we will see in Table 1 later). Next we analyze the worst completion time of flows for a given size. 18

12

8

DT SDS SDI

Maximum completion time (in seconds)

Maximum completion time (in seconds)

14

10 8 6 4 2 0

7

PS+PS RED SDI

6 5 4 3 2 1 0

0

50

100

150

200

0

50

Flow sizes (in packets of 1000 B)

100

150

200

Flow sizes (in packets of 1000 B)

(a) DT, SDS and SDI

(b) PS+PS, RED and SDI

Figure 7: Scenario 1 - Maximum completion time

Fig. 7 plots this metric for flow-sizes less than or equal to 200 packets. For clarity, two sub-figures are given: Fig. 7(a), compares DT and the two spike-detecting AQM policies, and Fig. 7(b), compares PS+PS, RED and SDI policies. As expected, both SDS and SDI perform better than DT. Of the two, SDI gives smaller maximum completion time for small and medium size flows, as SDS may drop bursts of packets from large spikes at times of congestion (causing timeouts); whereas SDI drops only probabilistically depending on the size of the large spike to which the packet belongs. The second sub-figure, Fig. 7(b), shows that SDI outperforms not only PS+PS, but also RED. In RED, packets may be dropped (randomly) depending on the congestion level, and hence packets from flows of sizes less than a few hundreds might also be dropped causing the TCP sender to slow down and incur longer completion times. Whereas in SDI, only packets from large spikes (and hence large flows) are dropped randomly at times of congestion. Table 1: Scenario 1 - Comparison of TOs, CCs and CT s. PM DT PS RED SDS SDI

small TOs CC 792 234 341 0 0

1091 351 891 0 0

TO

sum CC

small CT

large CT

all CT

2003 5552 783 1813 745

5151 6996 8321 5495 5923

0.7624 0.3781 0.3945 0.3696 0.3797

1.7533 1.4712 1.1582 1.0674 1.0476

1.2201 0.8830 0.7473 0.6919 0.6882

Table 1 lists other metrics, supporting the arguments given above. For each policy, the table lists the number of timeouts faced by small flows (size ≤ 20 KB) in the first column and the number of congestion-window cuts encountered by small flows in the second column. A note on the second metric: small CC (standing for ‘Congestion Cuts’) gives the total number of times the small flows reduced their congestion-windows during their life19

times. The third and fourth columns are for the total number of timeouts and congestion-window cuts, respectively, faced by all flows. The mean completion times (indicated by CT ) for small, large and all flows are the remaining three metrics, in order. Between DT and PS+PS, though PS+PS brings down the number of time-outs and congestion-cuts of small flows, it does so by inflicting a higher number of time-outs and congestion-cuts on large flows. This happens as PS+PS gives strict priority to the high-priority queue where flows with sizes not greater than the threshold are served. Though RED gives lesser number of time-outs for small flows in comparison to DT, the number is still high. Small flows face neither time-outs nor congestion-cuts under SDS and SDI policies, in this scenario. This explains why these policies give the best performance in terms of the mean completion time as well the worst completion time of small flows. Note that large flows under SDS faces more number of time-outs than in RED (as bursts of packets might be dropped in the former), whereas SDI policy brings down the total number of timeouts (by randomizing drop instances). Also note that the total number of congestion-cuts under SDS and SDI policies are comparable to that under DT, while the count is much higher under PS+PS and RED. This bias against large flows under RED was also observed in [31]. The values for the remaining three metrics in the table are in line with the plots shown earlier. Observe that the SD-AQM policies also improve the mean completion times over all flows, in comparison to other policies. 6.2. Scenario 2: M = 1000

(a) for flow-sizes ≤ 200

(b) for large flows

Figure 8: Scenario 2 - Conditional mean completion time

The bottleneck queue-size is 1000 packets. Other settings were same as in 20

Maximum completion time (in seconds)

8 7

PS+PS RED SDI

6 5 4 3 2 1 0 0

50

100

150

200

Flow sizes (in packets of 1000 B)

(a) DT, SDS and SDI

(b) PS+PS, RED and SDI

Figure 9: Scenario 2 - Maximum completion time Table 2: Scenario 2 - Comparison of TOs, CCs and CT s. PM DT PS RED SDS SDI

small TOs CC 1619 1289 473 1 41

2068 1722 1026 1 72

TO

sum CC

small CT

large CT

all CT

4137 3515 1202 1806 739

10028 9794 8834 5521 5928

0.5571 0.4246 0.4113 0.3697 0.3798

1.4292 1.2836 1.2019 1.0689 1.0456

0.9599 0.8214 0.7764 0.6926 0.6873

Scenario 1. Fig. 8(a) plots the mean completion time for flow-sizes less than or equal to 200 packets. For this metric, DT gives the highest values, SDS and SDI gives the lowest, while RED and PS+PS are in between. Fig. 8(b) reveals that, this is achieved with negligible affects on the mean completion times of large flows. The average values of this metric for different flow-size ranges are plotted in Fig. 10(a). It shows the reduction in mean completion times attained by flows under SD-AQM policies. Fig. 9 (with sub-figures 9(a) and 9(b)) gives the maximum completion times. The performance of small and medium size flows are worse in Scenario 2 than in Scenario 1 under RED and PS+PS policies; whereas under SD-AQM policies the performance is relatively same. The values of other metrics given in Table 2 also back this argument. The time-outs faced by small flows and all flows in policies other than SD-AQM policies have increased considerably. Between SDS and SDI, the former gives relatively lesser number of timeouts and congestion-cuts to small flows, but by inducing much higher number of timeouts on the overall traffic. Comparing DT and PS+PS for different metrics in Table 2, though PS+PS fares better than DT, the improvement is relatively lesser in this scenario. Interestingly, SDI performs better even in scenario with small buffers in comparison to other policies, while a size-based scheduler like 21

(a) Scenario 2

(b) Scenario 3, slow flows

Figure 10: Mean completion time for ranges of flow sizes

PS+PS shows decrease in performance. 6.3. Scenario 3: Slow path, M = BDP Table 3: Scenario 3, slow flows - Comparison of TOs, CCs and CT s. PM DT PS RED SDS SDI

small TOs CC

sum TO CC

191 83 62 24 8

470 307 148 283 80

220 123 133 40 14

876 725 961 701 592

small CT

large CT

all CT

1.3587 0.8293 0.7980 0.7628 0.7433

3.4019 2.7189 2.5359 2.4653 2.2007

2.2802 1.6815 1.5818 1.5306 1.4006

Herein, we do a preliminary study to understand the affect of our policies on flows that are slow. Of the ten node pairs, the path connecting the first node pair has 200 ms of base RTT, and other node-pairs’ paths have 100 ms of base RTT. We call the flows taking the path with larger value of RTT as slow flows and the remaining as fast flows. There were 2000 slow flows and 18,000 fast flows in this experiment. Table 3 compares the performance of slow flows. We see that SDI policy gives the smallest number of time-outs and congestion-cuts not only to small flows, but to medium and large flows as well. Observe that the total number of time-outs faced by these flows in DT is reduced by more than a factor of five with SDI-AQM policy, hence giving much lesser mean completion times to large flows. For slow flows, Fig. 10(b) plots the mean completion time for range of flow-sizes. Unlike previous scenarios, it can be observed that SDI-AQM gives the least values for all flow-size ranges in this scenario. Though not given here (due to space constraints), analyses of the performance of fast flows, showed that SDI policy achieved this by slowing down the large fast flows — the number of congestion-cuts for large flows was slightly larger in SDI than in DT. 22

7. Conclusions In this work, we proposed, developed and evaluated two spike-detecting AQM policies, one dropping packets deterministically and another randomly. Different from existing works, these new policies used AQM without needing to track sizes of flows, besides working using a single queue. The analysis using Markov Chain model showed that smaller spikes (and hence small flows) face considerably less packet-losses in comparison to RED. Simulations validated this observation. Using a variety of metrics, performance of flows were analyzed under not only the spike-detecting AQM polices SDS and SDI, but also under RED, and compared against drop-tail and PS+PS. Flows achieve better performance under AQM policies, including RED, than under the size-based scheduling policy PS+PS. One reason why space-prioritization out-performs time-prioritization is because of TCP’s congestion-control mechanism. Dropping of a packet (as it happens in space-prioritization) explicitly slows down a TCP sender, forcing it to reduce the sending rate to (at least) half. On the other hand, by slowing down packet transmission (as it happens in time-prioritization), the TCP sender slows its sending rate only proportional to the delay, and keeps pumping packets at almost the same rate until the buffer overflows and packets get dropped; and hence TCP’s reaction is slower in this case, thereby affecting other flows. A detailed analysis of this is left as future work. Both the SD-AQM policies are seen to outperform the traditional droptail, PS+PS as well as RED, besides maintaining the performance in smallbuffer scenario. Since SDI is designed to drop packets from large flows that cause congestion, slow flows complete faster in SDI (than in others) when competing with fast flows (as slow flows occupy less buffer than fast flows). While analysis using most of the metrics do not show much difference between the two SD-AQM policies, observing the number of time-outs faced by flows reveals that SDI-AQM performs better. A notable disadvantage is that the spike-detecting method needs to calculate the size of the active spike whenever the queue-length exceeds β packets. An interesting direction ahead would be to explore ways to reduce this, probably by considering in-buffer drop strategies. Besides, this work did not focus on finding the optimal values of parameters η, β and φ, which is another potential work ahead. Though intuitively the SDI-AQM policy would not hurt constant bit-rate and low bandwidth-consuming flows, this could be validated with analytical or simulation-based studies in future.

23

[1] D. M. Divakaran, Using spikes to deal with elephants, in: 30th IEEE Int’l Perf. Computing and Commun. Conf. (IPCCC), 2011, pp. 1–8. [2] Y. Zhang, L. Breslau, V. Paxson, S. Shenker, On the characteristics and origins of Internet flow rates, in: SIGCOMM ’02, 2002, pp. 309–322. [3] W. John, S. Tafvelin, T. Olovsson, Trends and differences in connectionbehavior within classes of internet backbone traffic, in: PAM’08, pp. 192–201. [4] D. Collange, J.-L. Costeux, Passive Estimation of Quality of Experience, J. UCS 14 (5) (2008) 625–641. [5] L. Schrage, A proof of the optimality of the Shortest Remaining Processing Time Discipline., Operations Research (16) (1968) 687–690. [6] I. A. Rai, G. Urvoy-Keller, M. K. Vernon, E. W. Biersack, Performance analysis of LAS-based scheduling disciplines in a packet switched network, SIGMETRICS Perform. Eval. Rev. 32 (1) (2004) 106–117. [7] L. Kleinrock, Queueing Systems, Volume II: Computer Applications, Wiley Interscience, 1976. [8] D. M. Divakaran, E. Altman, P. Vicat-Blanc Primet, Size-Based FlowScheduling Using Spike-Detection, in: Proc. ASMTA 2011, pp. 331–345. [9] S. Floyd, V. Jacobson, Random early detection gateways for congestion avoidance, IEEE/ACM Trans. Netw. 1 (1993) 397–413. [10] Smitha, A. Reddy, LRU-RED: an active queue management scheme to contain high bandwidth flows at congested routers, in: GLOBECOM ’01, Vol. 4, 2001, pp. 2311–2315. [11] L. Che, B. Qiu, H. R. Wu, Improvement of LRU cache for the detection and control of long-lived high bandwidth flows, Comput. Commun. 29 (1) (2005) 103–113. [12] L. Guo, L. I. Matta, The War between Mice and Elephants, in: ICNP ’01, 2001, pp. 180–188. [13] S. B. Fred, T. Bonald, A. Proutiere, G. R´egni´e, J. W. Roberts, Statistical bandwidth sharing: a study of congestion at flow level, SIGCOMM CCR 31 (4) (2001) 111–122. [14] X. Chen, J. Heidemann, Preferential treatment for short flows to reduce web latency, Comput. Netw. 41 (6) (2003) 779–794. [15] M. Harchol-Balter, B. Schroeder, N. Bansal, M. Agrawal, Size-based scheduling to improve web performance, ACM Trans. Comput. Syst. (2003) 207–233.

24

[16] S. F. Yashkov, Processor-sharing queues: some progress in analysis, Queueing Syst. Theory Appl. 2 (1) (1987) 1–17. [17] M. Nuyens, A. Wierman, The Foreground-Background queue: A survey, Perform. Eval. 65 (3-4) (2008) 286–307. [18] I. A. Rai, E. W. Biersack, G. Urvoy-Keller, Size-based scheduling to improve the performance of short TCP flows, Network, IEEE 19 (1) (2005) 12–17. [19] K. Avrachenkovt, U. Ayesta, P. Brown, E. Nyberg, Differentiation between short and long TCP flows: predictability of the response time, in: INFOCOM 2004, Vol. 2, 2004, pp. 762 – 773 vol.2. [20] S. Aalto, U. Ayesta, Optimal scheduling of jobs with a DHR tail in the M/G/1 queue, in: ValueTools ’08, 2008, pp. 1–8. [21] L. Kleinrock, R. R. Muntz, Processor sharing queueing models of mixed scheduling disciplines for time shared system, J. ACM 19 (3) (1972) 464–482. [22] S. Aalto, U. Ayesta, Mean Delay Analysis of Multi Level Processor Sharing Disciplines, in: INFOCOM 2006, 2006, pp. 1–11. [23] T. Zseby, et al, RFC 5475: Techniques for IP Packet Selection, Network Working Group (Mar. 2009). [24] K. Psounis, A. Ghosh, B. Prabhakar, G. Wang., SIFT: A simple algorithm for tracking elephant flows, and taking advantage of power laws, in: 43rd Annual Allerton Conf. on Control, Communication and Computing, 2005. [25] D. M. Divakaran, G. Carofiglio, E. Altman, P. Primet, A Flow Scheduler Architecture, in: NETWORKING 2010, 2010, pp. 122–134. [26] C. Estan, G. Varghese, New directions in traffic measurement and accounting, SIGCOMM CCR 32 (4) (2002) 323–336. [27] J. Chen, M. Heusse, G. Urvoy-Keller, EFD: an efficient low-overhead scheduler, in: NETWORKING’11, 2011, pp. 150–163. [28] U. Bodin, O. Schelen, Drop strategies and loss-rate differentiation, in: ICNP, 2001, pp. 146–154. [29] T. Bonald, M. May, J.-C. Bolot, Analytic evaluation of RED performance, in: IEEE INFOCOM 2000, Vol. 3, 2000, pp. 1415–1424. [30] A. Vishwanath, V. Sivaraman, M. Thottan, Perspectives on router buffer sizing: recent results and open problems, SIGCOMM CCR 39 (2009) 34–39. [31] E. Altman, T. Jim´enez, Simulation analysis of RED with short lived TCP connections, Comput. Netw. 44 (2004) 631–641.

25

A Spike-Detecting AQM to deal with Elephants

Mar 20, 2012 - As mice flows do not have much data, they almost always complete in ... Therefore, in this work, we also analyze performance of the RED. AQM. ... Priority-based scheduling gives priority to packets of one type over pack-.

664KB Sizes 1 Downloads 370 Views

Recommend Documents

Using Spikes to Deal with Elephants
large flows is to use (real-time) sampling to detect large flows (thus classifying them), and use this information to perform size-based scheduling. Since the requirement here is only to differentiate between small and large flows, the sampling strat

E-Books How to Deal With Haters
Sep 26, 2014 - Internet The Two Traps When Dealing With Them Understanding Constructive Versus Destructive. Criticism Behavioral Traits of Subversive Haters Social Programming Parrots A Battle of WIllpower. Should You Cut Ties? The Types of Malicious

[PDF] DEFENSIVENESS: 10 Ways to Deal With Difficult ...
Online PDF DEFENSIVENESS: 10 Ways to Deal With Difficult People, Stop Overreacting, And Feel Less Stress ... Rules To Be Broken (Or Followed At Your Own Expense) C Kruse pdf, by C Kruse DEFENSIVENESS: 10 ..... Of course, our goal.

How To Deal With Debt Recovery in Melbourne.pdf
Page 1 of 8. o. "0. :z. us 10EE81. Eighth Semester B.E. Degree Examination, June/July 2017. Electrical Design Estimation and Costing. Time: 3 hrs. Max. Marks: 100. ote: 1.Answer FIVE full questions, selecting. at least TWO questions from each part. 2

HOW TO DEAL WITH MULTI-SOURCE DATA FOR TREE ... - Lirmm
HOW TO DEAL WITH MULTI-SOURCE DATA FOR TREE DETECTION BASED ON DEEP. LEARNING. Lionel Pibrea,e, Marc Chaumonta,b, ... preprocessing on the input data of a CNN. Index Terms— Deep Learning, Localization, Multi- ..... perform a 5-fold cross validation

How to Deal Effectively with Trademark Infringement?.pdf ...
How to Deal Effectively with Trademark Infringement?.pdf. How to Deal Effectively with Trademark Infringement?.pdf. Open. Extract. Open with. Sign In.

Best hotels in Ooty | Book a best deal with fairstay
Fairstay hotles with green surrounding in Ooty, Lets enjoy your holidays packages with fairstay at pleasing cost. Book a Best deals in Online

Get Deal With Tree Lopping In Canberra With Perfection.pdf ...
There was a problem loading more pages. Retrying... Get Deal With Tree Lopping In Canberra With Perfection.pdf. Get Deal With Tree Lopping In Canberra With ...

A robust proportional controller for AQM based on ...
b Department of Computer Science, HongKong University of Science and Technology, HongKong, China. a r t i c l e i n f o ... best tradeoff between utilization and delay. ... than RED under a wide range of traffic scenario, the major draw-.

Deal or No Deal Game.pdf
Sign in. Page. 1. /. 6. Loading… Page 1 of 6. Page 1 of 6. Page 2 of 6. Page 2 of 6. Page 3 of 6. Page 3 of 6. Deal or No Deal Game.pdf. Deal or No Deal Game.