Power Management of Packet Switches via Differentiated Delay Targets

Benjamin Yolken and Nicholas Bambos
Stanford University, Stanford, CA 94305
{yolken,bambos}@stanford.edu

Abstract—In this paper, we explore two novel scheduling algorithms which allow for both differentiated quality-of-service (QOS) and power conservation in input-queued packet switches. At their core is the idea of a backlog target which represents the delay sensitivity of each input/output port combination. The first algorithm, target-based projective cone scheduling (T-PCS), incorporates these targets into the well-studied projective cone scheduling algorithm, a generalized form of maximum weight matching (MWM). The second algorithm, average backlog scheduling (ABS), uses a 'memory window' to push average backlogs towards their targets. We explain the intuition behind each of these and then show, through simulation, that both exhibit high performance in terms of managing power and QOS, simultaneously addressing these two key concerns in switches.

I. INTRODUCTION

Much research has recently addressed the problem of scheduling in input-queued packet switches. Most of this work, however, has focused on throughput and, in particular, on proving that certain classes of scheduling algorithms (e.g., maximum weight matching, MWM) maximize it [1]. Such scheduling algorithms, however, must also consider other potential concerns in switch operation. In this paper, we focus our attention on two such areas: (a) providing differentiated quality-of-service (QOS) to the switch users and (b) reducing power consumption. Little work has been done in either of these areas within the switching context. Both, however, are becoming increasingly important as the Internet becomes more heterogeneous and as computing devices and data centers become increasingly power, rather than bandwidth, constrained.

The first concern, quality-of-service, reflects the fact that network users are heterogeneous in terms of their loads, sensitivity to delay, payment amounts, types of applications used, etc. Thus, it may not be optimal from an efficiency standpoint to provide equal service across all users. In the case of a switch, this service 'quality' could be quantified by average packet delays (as well as other parameters). One needs, therefore, to consider ways of scheduling so that certain users experience consistently lower delays than others. With this, it becomes possible to allocate limited switch resources in a more efficient way and eventually provide (statistical/soft) service 'guarantees' to users.

The second concern, power control, has become increasingly important as computing devices consume higher and higher amounts of power. In the specific case of network

switches, this increasing consumption is being caused by higher line speeds and throughput requirements. These power demands not only cost money to supply, but also introduce the need for expensive and technologically challenging cooling systems in large data centers [2] and switching farms.

In this paper, we propose several algorithms which simultaneously address both of these concerns. The core idea is that switch users specify average backlog targets. Users who require higher service levels (e.g., because of some delay-sensitive application they are running) will presumably have lower targets than those with low service needs.¹ Given these targets, the switch then schedules packets so that these targets are met or, at least, approximately met.

Our proposed algorithms are closely related to projective cone scheduling (PCS), a generalization of maximum weight matching (MWM) [1]. Thus, one can leverage the extensive research already done on these topics to get stability guarantees, randomized approximations, and other desirable results. At the same time, backlog targets allow for a certain degree of switch power control/management. If the backlog targets are all high, then the switch can presumably 'scale back' service, conserving power. As these targets get lower, however, the switch is forced to provide more service to reduce delays, at the cost of higher power consumption. Eventually, if the targets get low enough, the switch moves to full power and we get performance akin to standard PCS/MWM.

Some prior work has examined the effects of scheduling algorithms on average delays and/or average delay bounds in packet switches [3]–[6]. Other researchers have looked at differentiated QOS in the context of discriminatory processor sharing [7]. No papers that we know of, however, have directly addressed the former topic for input-queued packet switches, especially jointly with power management.
The power control/management problem, on the other hand, has been somewhat examined for packet switches [2], [8], [9]. These approaches, however, frame the problem in terms of the solution to a centrally imposed dynamic optimization, without a QOS viewpoint. Thus, the resulting backlog/power tradeoffs may not be compatible with user requirements. In addition, these papers assume that the switch can operate in various speedup modes, a feature not required in our model.

¹The related economic issues (i.e., how targets are related to utilities and payments) are not addressed in this paper for lack of space. As discussed in the conclusion, however, these are potentially rich topics for future research.

On the other hand, since these previous works solve an optimization problem, they will presumably have better performance from a purely power-based standpoint. Actually, these different approaches are potentially complementary; this is a topic for future research.

The remainder of this paper is organized as follows. In Section II, we discuss the mathematical switch model used in our analysis and simulations. We present our QOS/power control algorithms in Section III, giving the motivation for how and why they work. Section IV discusses our simulation results. Finally, we conclude and give directions for future research in Section V.

II. SWITCH MODEL

Consider an N × N input-queued packet switch with N virtual output queues (VOQs) at each input port to prevent head-of-line blocking. At each time slot (t = 1, 2, . . .), the switch transmission schedule chooses a service configuration that removes packets from some subset of the VOQs and moves them to the corresponding output ports. New packets then arrive into the system and the process repeats.

More formally, let X(t) ∈ Z_+^{N²} represent the vector of packet backlogs in each of the VOQs at the beginning of the t-th time slot. Although X is a vector, for simplicity we will use the notation X_ij to refer to the component for the input port i / output port j combination. At each time slot, a service configuration S(t) ∈ Z_+^{N²} is chosen from S, the set of all feasible service vectors. In the crossbar switch context, S is usually taken to be the set of crossbar matchings, i.e., the set of all 0/1 vectors corresponding to a one-to-one mapping of input and output ports. Although this is the assumption we make in our simulations, it is not necessary for the algorithms or analysis we propose. Finally, new packets arrive into the system VOQs according to some known distribution. Representing these arrivals with A(t) ∈ Z_+^{N²}, the complete system dynamics can be written as

X(t + 1) = [X(t) − S(t)]⁺ + A(t)    (1)

where [x]⁺ denotes the vector max(0, x), taken componentwise.

A. Admissibility and Stability

Suppose that, with probability 1, the arrival traces satisfy

lim_{τ→∞} (1/τ) Σ_{s=1}^{τ} A(s) = ρ    (2)

for some finite, well-defined vector ρ ∈ R^{N²}. If also

ρ ∈ R = { ρ̄ ∈ R_+^{N²} : ρ̄ ≤ Σ_{S∈S} p_S S for some weights p ≥ 0 with Σ_{S∈S} p_S ≤ 1 }    (3)

then we say that the corresponding arrivals are admissible. In other words, the average arrival rate is covered by some convex combination of the service vectors. If the average arrivals are not in the admissibility region R, then it is easy to see that the system will be 'badly behaved', with arrivals exceeding departures for one or more queues and backlogs blowing up linearly to ∞.

Fig. 1. Illustration of a 4 × 4 input-queued, crossbar packet switch with the port matching 1 → 2, 2 → 4, 3 → 3, and 4 → 1.

We are interested in studying service scheduling policies that are provably rate stable, i.e., those for which

lim_{τ→∞} X(τ)/τ = 0    (4)

under any admissible arrival stream. If the latter holds, then the average departure rate of packets from each VOQ is equal to the average arrival rate (see [10] for more details on this framework). Thus, the system is 'well behaved' from a queueing standpoint and we can begin to study issues related to user QOS.

III. ALGORITHM DESIGN

A. Projective Cone Schedules

We begin our discussion of switch scheduling by reviewing the projective cone schedule (PCS) family of algorithms [10]. At each time slot, PCS picks the service vector that maximizes the inner product

S(t) = arg max_{S∈S} ⟨S, BX(t)⟩    (5)

where B is a fixed N² × N² matrix that is (a) positive-definite, (b) symmetric, and (c) has all non-positive off-diagonal elements. Note that if we take B = I, we recover the standard maximum weight matching (MWM) algorithm. PCS thus generalizes MWM, allowing different VOQs to receive different priority levels. It has been shown [10] that the PCS algorithm is rate stable for all admissible arrival traces.
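The selection rule (5) can be sketched in a few lines; the following is a minimal brute-force illustration (function names are ours, and enumerating all N! crossbar matchings is only practical for very small N — the paper's cited work discusses efficient and randomized alternatives):

```python
import itertools
import numpy as np

def pcs_schedule(X, B, matchings):
    """Pick the service vector S maximizing <S, B X>, as in eq. (5)."""
    scores = [S @ (B @ X) for S in matchings]
    return matchings[int(np.argmax(scores))]

def crossbar_matchings(N):
    """All one-to-one input/output matchings of an N x N crossbar,
    flattened to 0/1 vectors of length N^2 (entry (i, j) -> index i*N + j)."""
    out = []
    for perm in itertools.permutations(range(N)):
        S = np.zeros(N * N, dtype=int)
        for i, j in enumerate(perm):
            S[i * N + j] = 1  # input i is matched to output j
        out.append(S)
    return out
```

With B = I this reduces to MWM: for a 2 × 2 switch with backlogs X = [3, 0, 0, 2], the matching serving queues (1, 1) and (2, 2) scores 5 and is chosen over the alternative, which scores 0.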

Fig. 2. Illustration of the state space and controls under the T-PCS algorithm. As discussed in the text, the state space can be partitioned into 3 regions: (1) all queues at or above their targets, (2) one or more queues below their targets, but some S ∈ S can still provide positive total service, and (3) the shutdown region.

B. Target-based PCS

Suppose that there is a one-to-one mapping of switch users to VOQs, and that these users each have backlog targets represented by the vector b ∈ R_+^{N²}. By Little's Law, the average backlog in each of the VOQs is proportional to the average delay experienced. Thus, these backlog targets can also be thought of as delay targets without significant conceptual changes. Now, consider applying the following adjustment of PCS to take these backlog targets into account:

Algorithm 1 Target-based Projective Cone Scheduling (T-PCS)
1: Given the modified service configuration set Ŝ = S ∪ {0} and an N² × N² matrix B satisfying the PCS conditions above
2: At each time slot, choose S(t) ∈ Ŝ to maximize the inner product ⟨S, B(X(t) − b)⟩

At each time slot, we thus apply PCS to a translated version of the state space; we also add a 'shutdown' service mode to S, i.e., one under which no VOQs are serviced. Note that if we take B = I and S equal to the set of crossbar service configurations, we get a version that can be called 'T-MWM'. Also note that if b = 0, then the state space trajectory is identical to that under regular PCS (respectively, MWM). We conjecture that the T-PCS algorithm is rate stable, as desired. In fact, because of the similarities between T-PCS and PCS, one can reasonably modify existing stability proofs to cover the case of translated state spaces. We will present the detailed proof in a future paper, where space limitations allow.

C. QOS and Power Control Intuition

As stated in the introduction, we seek scheduling algorithms that are not only rate stable but also control QOS and power. In the following section, we discuss how our proposed algorithm addresses both of these concerns.
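Algorithm 1's selection step differs from plain PCS only in the translation by b and the extra shutdown candidate; a minimal sketch (again with names of our own choosing, and brute force over the configuration set for illustration):

```python
import numpy as np

def tpcs_schedule(X, b, B, matchings):
    """Algorithm 1 sketch: PCS applied to the translated state X - b,
    with a 'shutdown' vector S = 0 added to the configuration set."""
    shutdown = np.zeros_like(np.asarray(X, dtype=float))
    candidates = list(matchings) + [shutdown]
    # Score each candidate by <S, B(X - b)> and take the maximizer.
    scores = [S @ (B @ (X - b)) for S in candidates]
    return candidates[int(np.argmax(scores))]
```

For a 2 × 2 switch, if every queue is below target (say X = 0, b = 2), both matchings score negatively and the shutdown vector (score 0) wins; if queues (1, 1) and (2, 2) are well above target, the corresponding matching is chosen just as under PCS.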

The intuition behind the QOS performance of T-PCS is fairly straightforward. Notwithstanding arrivals and service interdependencies, the system is 'drawn' to the point X = b in just the same sense that, in regular PCS/MWM, the system is 'drawn' to X = 0. Put another way, if X > b, the T-PCS algorithm will provide service so that X is pulled closer to b. On the other hand, if X < b, then the system will 'shut down,' allowing arrivals to refill the queues until some or all of the backlog targets are met.

Unfortunately, however, we cannot expect backlog targets to be exactly met. This is due not only to possible burstiness in the arrivals, but, more significantly, to interdependencies in the service vector set. In a 2 × 2 crossbar switch, for example, we have S = {[1, 0, 0, 1], [0, 1, 1, 0]}. Note that the first and fourth queues (and likewise, the second and third queues) must be serviced at the same time. If the queues in either pair have very different targets and/or arrival traces, it is unlikely that all targets can be met. These interdependencies are more complicated in larger switches; however, the same general idea of being forced to over-serve or under-serve some subset of the queues still applies.

Another factor possibly limiting the QOS performance of our algorithm is the lack of backlog memory. Recall that the b_i values represent targets in average backlog. The T-PCS algorithm, however, considers only the current backlog at each time slot. This makes the scheduling easy to implement, but prevents the algorithm from making 'long term' adjustments to shape the average backlogs. In the next section, we present a separate algorithm that addresses this concern.

Despite these challenges, however, it is fair to say that our algorithm will approximately meet the targets in many cases. Even if these targets are not met, our simulation results show that there is still a roughly proportional relationship between average backlog and b.
Thus, it might be possible to make an affine adjustment to the b_i values to make them more accurate as targets. This is a topic for future research.

The power savings in our algorithm, on the other hand, come from two sources: (a) the addition of a 'shutdown' service vector to S and (b) the target-based translation of the state space. The effect of the former on power control is clear; what is less intuitive, however, is how the latter also helps reduce power expenditure. As a first step, note that our T-PCS algorithm partitions the state space into 3 regions:
1) X ≥ b, X ≠ 0
2) X_i < b_i for some i, but ⟨S, B(X − b)⟩ > 0 for some S ∈ S
3) ⟨S, B(X − b)⟩ ≤ 0 for all S ∈ S
In regions (1) and (2), the switch is operating regularly at full power. In region (3), however, the switch sets S = 0 and is effectively shut down. This partitioning is illustrated in Figure 2. Note that, if b = 0, then region (3) contains just a single point, X = 0. As b increases, however, the shutdown region also increases in size (but still remains compact and convex). In particular, the boundary between region (3) and the union of the other two regions gets larger. Hence, we would intuitively

Fig. 3. Average backlog versus β in two sample VOQs under T-PCS and ABS: (a) VOQ (1, 1), T-PCS; (b) VOQ (2, 4), T-PCS; (c) VOQ (1, 1), ABS; (d) VOQ (2, 4), ABS. Each series represents a separate load level, with α increasing as the curves get higher; the green dotted line is the target.

expect that, at any given time, the probability of the state 'hitting' this boundary and putting the switch into shutdown mode would also increase.

Another way of thinking about the effects of b on power is as follows. If b = 0 and loads are sufficiently low, then we would expect the state to be near 0 most of the time. Thus, at any given time slot, it is likely that some service will be 'wasted', i.e., because of service interdependencies, the switch will set S_i = 1 for some queues that are currently empty. As b increases, however, X is pushed away from 0. Thus, it is increasingly unlikely that service is 'wasted' in this way. Instead, some backlogs are pushed temporarily below their targets, a change which gives the switch potential 'breathing room' in future time slots.

The previous discussion is obviously heuristic. Although we have observed in simulation that power is decreasing in b, and this also makes intuitive sense, the latter result requires a proof. We leave this as a task for future work.
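The three-region partition discussed above can be made concrete with a small classifier (a sketch under our own naming, using brute force over the matchings; the shutdown test follows directly from Algorithm 1's selection rule):

```python
import numpy as np

def tpcs_region(X, b, B, matchings):
    """Classify state X into the three T-PCS regions described above:
    1 -- all queues at or above target (and X nonzero),
    2 -- some queue below target, but positive translated service available,
    3 -- the shutdown region: <S, B(X - b)> <= 0 for every matching."""
    scores = [S @ (B @ (X - b)) for S in matchings]
    if max(scores) <= 0:
        return 3  # shutdown vector wins; switch powers down
    if np.all(X >= b) and np.any(X != 0):
        return 1
    return 2
```

For instance, in a 2 × 2 switch with B = I: the origin with b = 0 lies in region 3 (consistent with region 3 shrinking to {0} there), a fully backlogged state with b = 0 lies in region 1, and a state with one queue far above target and the rest below lands in region 2.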

D. Average Backlog Scheduling

As discussed previously, the T-PCS algorithm has the potential drawback that backlog targets may not be exactly met. One way of improving performance in this regard is to add memory into the system so that non-myopic backlog controls can be implemented.

To this end, suppose that we have a memory window of fixed length w ≥ 1 which records the backlogs of each of the queues over the previous w time slots. Let

X̄_w(t) = ( Σ_{τ=ω₀+1}^{t} X(τ) ) / (t − ω₀)    (6)

where ω₀ = [t − w]⁺. If w = ∞, then we have infinite memory and can take ω₀ = 0. Now consider the following algorithm:²

²The '∗' operator here represents componentwise multiplication.

Fig. 4. Average power vs. β under the T-PCS and ABS algorithms under four load levels (α ∈ {0.3, 0.5, 0.7, 0.9}): (a) T-PCS; (b) ABS.

Algorithm 2 Average Backlog Scheduling (ABS)
1: Given the modified service configuration set Ŝ = S ∪ {0} and an N² × N² matrix B satisfying the PCS conditions above
2: At each time slot, choose S(t) ∈ Ŝ to maximize the inner product ⟨S, B((X̄_w(t) − b) ∗ X(t))⟩

At each time slot, we thus apply a PCS-like control where the current state vector is weighted by the deviation in the average backlog. Because of the memory involved, the algorithm is now able to 'correct' for previous deviations in ways that T-PCS cannot. As a queue stays below its target for longer and longer periods of time, for instance, the pressure not to service this queue gets increasingly higher. Thus, the system can eventually 'right' itself and push this queue's average backlog back towards its target.

In terms of power, we would also expect favorable performance, with 'shutdowns' becoming increasingly frequent as b increases. The intuition behind this is similar to that used for T-PCS above. In this case, however, the exact mechanics are more complicated because the control chosen at each time slot is no longer just a function of the current state. This also makes proving stability harder. We believe that the latter claims are true under the appropriate conditions. Proving these, however, requires future work.

IV. SIMULATED PERFORMANCE

The previous two algorithms were simulated for a 4 × 4 input-queued, crossbar switch. For simplicity, arrivals were taken to be uniform, i.i.d. Bernoulli with λ_ij = α/N for α ∈ {0.3, 0.5, 0.7, 0.9}. Backlog targets were set to b = βB for β ∈ {0, 0.5, 1.0, . . . , 10.0} and random vector B sampled from U[0,1]^{N²}. In the case of ABS, w was set to ∞.

The simulated switch was run for 125,000 time slots under each α-β-algorithm combination. The average backlog for

each VOQ was recorded in each case. Figure 3 shows the average backlog versus target and load for two such queues: (1, 1) and (2, 4), which had B_ij values of 0.42 and 0.17, respectively. Note that, in each case, the average backlog is increasing in the target value except, possibly, at values of β close to 0.³ As expected, however, ABS does a much better job of actually meeting the target; T-PCS backlogs are either above or below these targets by a noticeable amount.

Figure 4, on the other hand, shows the average power versus target and load for each algorithm. For simplicity, all configurations are assumed to have power 1, except for the 'shutdown' mode which has power 0.⁴ The plots show that, for fixed arrival load, this average power is decreasing in b, matching the intuition discussed in the previous section. Moreover, this decrease becomes more significant as load decreases; under higher loads, the switch is forced to operate at full power more frequently, even when the backlog targets are high. Finally, we can note that, all else being equal, T-PCS achieves larger power savings than ABS. Thus, there appears to be a tradeoff between power and target fidelity; ABS more accurately meets the targets but requires more power to do so.

³At these β's, some or all of the queues cannot meet their targets, even if the switch continuously operates at full power. Thus, we see some 'flattening' or even a decrease in the average backlog with β.
⁴'Average power' in this case is equivalent to 1 minus the shutdown frequency.

V. CONCLUSION

In this paper, we have described two new switch scheduling algorithms, showing through simulation that both have the potential to: (a) allow users to 'shape' their packet backlogs/delays and (b) reduce power expenditure in the switch. We believe that this opens the door to many promising areas of future study; these include a more rigorous approach to the explanations of power savings and target attainability, the use of bidding mechanisms in the assignment of user

'targets,' and performance studies of our algorithms under more complicated arrival and switching schemes.

REFERENCES
[1] I. Keslassy and N. McKeown, 'Analysis of scheduling algorithms that provide 100% throughput in input-queued packet switches,' Allerton Conference on Communication, Control, and Computing, Allerton, IL, Oct. 2001.
[2] L. Mastroleon, D. O'Neill, B. Yolken, and N. Bambos, 'Power managed packet switching,' to appear in IEEE Hot Interconnects Conference, Stanford, CA, Aug. 2007.
[3] K. Ross and N. Bambos, 'Optimizing quality of service in packet switch scheduling,' IEEE International Conference on Communications, pp. 1986-1990, Paris, France, Jun. 2004.
[4] D. Shah and M. Kopikare, 'Delay bounds for approximate maximum weight matching algorithms for input queued switches,' IEEE INFOCOM, pp. 1024-1031, New York, NY, Jun. 2002.
[5] D. Shah and D. Wischik, 'Optimal scheduling algorithms for input-queued packet switches,' IEEE INFOCOM, Barcelona, Spain, Apr. 2006.
[6] E. Leonardi, M. Mellia, F. Neri, and M. Ajmone Marsan, 'Bounds on average delays and queue size averages and variances in input-queued cell-based switches,' IEEE INFOCOM, pp. 1096-1103, Anchorage, AK, Apr. 2001.
[7] K. Avrachenkov, U. Ayesta, P. Brown, and R. Nunez-Queija, 'Discriminatory processor sharing revisited,' IEEE INFOCOM, pp. 784-795, Miami, FL, Mar. 2005.
[8] N. Bambos and D. O'Neill, 'Power management of packet switch architectures with speed modes,' Allerton Conference on Communication, Control and Computing, Allerton, IL, Oct. 2003.
[9] A. Dua, B. Yolken, and N. Bambos, 'Power managed packet switching,' to appear in IEEE International Conference on Communications, Glasgow, Scotland, Jun. 2007.
[10] K. Ross and N. Bambos, 'Local search scheduling algorithms for maximal throughput in packet switches,' IEEE INFOCOM, pp. 1158-1169, Hong Kong, China, Mar. 2004.
