Distributed learning in a Congestion Poisson Game

Julio Rojas-Mora, Habib B.A. Sidi, Rachid El-Azouzi, Yezekael Hayel
LIA/CERI, Université d'Avignon, Avignon, France

Abstract

This paper deals with a game theoretic model based on a Congestion Poisson Game. The novelty of our study is that we focus on a fully distributed mechanism under which the system converges to a Nash equilibrium. We are interested in observing the impact of arrivals and departures of players on the convergence metrics of the algorithm. To this end, we propose several adaptations of a basic reinforcement learning algorithm, for which we carried out extensive simulations. Our main problems are, first, how a player becomes aware of changes in the system state and, second, what it should do in that case. We propose several approaches to detect system state changes, and we show that there is no need to restart the algorithm at each event to attain convergence to a Nash equilibrium 80% of the time.

Index Terms: Poisson Games, Distributed Algorithms, Simulation, Performance Analysis.

I. INTRODUCTION

Game theory can be used to model and understand many complex systems. For example, it has been successfully applied for several years in communication networks [16]. In this paper, we are interested in applying a specific class of games, Poisson Games [9], to a problem related to user association in wireless networks. More precisely, we look at a distributed algorithm for this problem. Reinforcement learning techniques were first applied in a wireless network to study an optimal power control mechanism in [6]. Their distributed algorithm is based on [12], in which the authors propose a decentralized learning algorithm for Nash equilibria. The advantage of that mechanism is twofold. First, the algorithm is fully distributed: each agent does not need much information to update his decision, only his perceived utility, which depends on the other players' actions but can be easily obtained. Second, it has been theoretically proved that this decentralized mechanism, if it converges, converges to a Nash equilibrium. Moreover, if the non-cooperative game has certain properties, such as admitting a potential (as in [4]), then this algorithm necessarily converges to a Nash equilibrium. Numerous applications of this algorithm, or small variants of it, have been proposed in the literature: spectrum sharing in a cognitive network [14], routing protocols in an ad hoc network [10], repartition of traffic between operators [1], or pricing [8]. Mostly, those studies consider a fixed number of players. But the algorithm takes a certain amount of time to converge and, depending on the system, the number of users playing the game can evolve very quickly. The aim of this paper is to understand how the dynamics of the player population impact the performance of a distributed learning algorithm. Moreover, we propose some adaptations of a basic reinforcement learning algorithm in order to improve its performance in a dynamic environment.
The paper is organized as follows. Section II presents the game theoretic model; we show the existence and non-uniqueness of a Nash equilibrium and explain the dynamics of the system. Our decentralized algorithm is introduced in Section III, and its performance in a stochastic environment is described with several simulations and statistics in Section IV. Finally, we conclude the paper in Section V.

II. GAME MODEL AND STOCHASTIC ENVIRONMENT

A. Game model

Our system is modeled as a non-cooperative game between a finite number of players. Each player has two strategies, and the utility perceived by each player depends only on the number of players that choose the same strategy as him (it is inversely proportional to this number). This type of non-cooperative game is a congestion game. It was first studied in [11], where it is proved that such a game has at least one Nash equilibrium. A repartition (n∗, N − n∗) is a Nash equilibrium of this game if no user has an interest in changing his decision unilaterally. As the game is symmetric, no distinction is made between users, and the two necessary and sufficient conditions for a repartition n∗ to be a Nash equilibrium are

  s1 / n∗ ≥ s2 / (N − n∗ + 1),   (1)

and

  s2 / (N − n∗) ≥ s1 / (n∗ + 1).   (2)

From [11], we know that there exists at least one Nash equilibrium in our congestion game. We also have the following result, which says that there are at most two Nash equilibria.

Proposition 1: Our congestion game has at least one Nash equilibrium and at most two Nash equilibria.

Proof: Cross-multiplying the two necessary and sufficient conditions (1) and (2), we obtain

  s1 (N − n∗ + 1) ≥ s2 n∗  ⟺  n∗ (s1 + s2) ≤ s1 N + s1,
  s2 (n∗ + 1) ≥ s1 (N − n∗)  ⟺  n∗ (s1 + s2) ≥ s1 N − s2,

which means that

  (s1 N − s2) / (s1 + s2) ≤ n∗ ≤ (s1 N + s1) / (s1 + s2).

If we let α = (s1 N − s2)/(s1 + s2) and β = (s1 N + s1)/(s1 + s2), then β − α = 1. If α is not an integer, there is always exactly one integer between α and β, which corresponds to n∗. If α is an integer, then α and β are both Nash equilibria.
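The conditions and bounds above can be checked numerically. The following is a small sketch (the function name `nash_repartitions` is ours, not from the paper) that enumerates the equilibrium repartitions by testing conditions (1) and (2) in cross-multiplied form, and computes the bounds α and β from the proof:

```python
def nash_repartitions(s1, s2, N):
    """Enumerate the repartitions (n, N - n) satisfying conditions (1) and (2):
    s1/n >= s2/(N - n + 1) and s2/(N - n) >= s1/(n + 1),
    written in cross-multiplied form to avoid division by zero."""
    def is_ne(n):
        # players on side 1 have no incentive to move (vacuous if the side is empty)
        no_move_from_1 = (n == 0) or s1 * (N - n + 1) >= s2 * n
        # players on side 2 have no incentive to move
        no_move_from_2 = (n == N) or s2 * (n + 1) >= s1 * (N - n)
        return no_move_from_1 and no_move_from_2
    return [n for n in range(N + 1) if is_ne(n)]

# Bounds from the proof: alpha <= n* <= beta, with beta - alpha = 1.
s1, s2, N = 1, 3, 5                  # the values used in Section IV
alpha = (s1 * N - s2) / (s1 + s2)    # 0.5
beta = (s1 * N + s1) / (s1 + s2)     # 1.5
```

For s1 = 1, s2 = 3 and N = 5, the only integer in [0.5, 1.5] is n∗ = 1, i.e., the repartition (1, 4); when α is an integer (e.g., s1 = s2 = 1, N = 3), both α and β are equilibria.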

Then we have proved the existence and the non-uniqueness of the Nash equilibrium in our system. The aim of the paper is to propose a totally decentralized mechanism that converges to these Nash equilibria in a stochastic environment. One important remark is that any congestion game is a potential game (Proposition 3.1 in [15]). Potential games have many important properties, but the most important one for us is that a basic distributed learning mechanism based on reinforcement always converges to a pure Nash equilibrium in a potential game (Proposition 3 in [12]). Hence, we know that, when our system is static (fixed number of players), we can use a basic reinforcement learning algorithm based on the one proposed in [12] and be sure that this decentralized algorithm converges to a pure Nash equilibrium. But our goal is to understand this decentralized algorithm in a stochastic environment: how the dynamics of the player population impact the convergence of the algorithm. Thus, first of all, we define the metrics that will be used to evaluate the performance of our algorithm in the stochastic environment.

B. Metrics

For the performance evaluation of the algorithm, we will use some metrics that allow us to compare the different mechanisms inside the algorithm in terms of both convergence and computational cost. Convergence will be evaluated through the cumulative convergence to the Nash equilibrium, defined by:

  CCt = (100/NS) · Σ_{j=1}^{NS} [ ( Σ_{i=1}^{t} ξij ) / t ],   (3)

where NS is the number of independent runs made, t is the iteration of the algorithm, and ξij is an indicator function that shows convergence to a Nash equilibrium for simulation j at iteration i. This metric gives, in the long run, the proportion of time during which the repartition is a Nash equilibrium.

Computational cost will be evaluated through the percentage of users in each simulation that, at iteration t, have not reached individual convergence and, hence, are performing calculations:

  UCt = (100/NS) · Σ_{j=1}^{NS} [ ( Σ_{p=1}^{Pjt} ζj(p) ) / Pjt ],   (4)

where Pjt is the number of users in simulation j at iteration t, and ζj(p) is an indicator function that shows whether player p of simulation j is performing calculations at iteration t.

Finally, a normalized trend metric based on [17] will be calculated to see how the changes of state affect the performance of player p at iteration t:

  Qt(p) = [ Σ_{m=0}^{w−1} Σ_{n=m+1}^{w} (m − n) · sign( ut−m(p) − ut−n(p) ) ] / [ Σ_{m=0}^{w−1} Σ_{n=m+1}^{w} (n − m) ],   (5)

where w is the size of the window (number of slots) used to evaluate the trend for player p, ut(p) is the normalized utility of player p at the beginning of slot t, and:

  sign(x) = 1 if x > 0; 0 if x = 0; −1 if x < 0.   (6)
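The metrics above are straightforward to compute from the simulation traces. The sketch below is ours, not the paper's code; in particular, the normalization of Eq. (5) (dividing by the sum of the pairwise lags so that the statistic lies in [−1, 1]) is an assumption of this reconstruction:

```python
import numpy as np

def cumulative_convergence(xi, t):
    """CC_t of Eq. (3): xi is an (NS x T) 0/1 array, where xi[j, i] = 1
    iff run j is at a Nash equilibrium at iteration i."""
    NS = xi.shape[0]
    return (100.0 / NS) * np.sum(np.sum(xi[:, :t], axis=1) / t)

def trend_statistic(u, w):
    """Pairwise-trend statistic of Eq. (5) over the last w+1 utilities
    u[t-w..t] (most recent last). Returns a value in [-1, 1]:
    +1 for a consistent drop in utility, -1 for a consistent rise."""
    num, den = 0.0, 0.0
    for m in range(w):
        for n in range(m + 1, w + 1):
            # u[-1 - m] is the utility m slots before the current one
            num += (m - n) * np.sign(u[-1 - m] - u[-1 - n])
            den += n - m  # normalization: sum of pairwise lags (assumed)
    return num / den
```

With this sign convention, a sustained performance drop (as caused by an arrival) drives the statistic toward +1, which is what the threshold test Qt(p) > τ of Case 8 in Section IV detects.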

C. Dynamic Environment

The main focus of our work is to adapt a totally decentralized algorithm to a stochastic environment of players. This point is very important for our networking scenario and architecture. We consider that mobiles are moving and enter the coexistence area following a Poisson process with rate λ. Moreover, their sojourn time in this area of coexisting technologies is assumed to be exponentially distributed with average 1/µ. Note that the sojourn time of a mobile in the system does not depend on the throughput. Our system can thus be modeled as an M/M/∞ queue, and the number of mobiles in the system follows a Poisson distribution with mean ρ = λ/µ. That kind of stochastic environment has been considered for auction mechanisms in [7], [13], where users come into the system following a Poisson process and leave it after an exponentially distributed sojourn time. The authors have shown that this stochastic environment induces very different results in the auction process. Our work is related to Poisson games [9], in which the number of players is a Poisson random variable. Those games have at least one Nash equilibrium, by application of Kakutani's fixed-point theorem, when the action sets and the types of players are finite and the utility functions are bounded. The main difference with our work is that in [9], the number of players, which is a Poisson random variable, is common knowledge for every player.
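The player dynamics can be reproduced with a standard event-driven simulation of the M/M/∞ queue: in a state with n users, the next event occurs after an exponential time with total rate λ + nµ, and it is an arrival with probability λ/(λ + nµ). The following sketch (function name and interface are ours) samples such a trajectory:

```python
import random

def simulate_mm_inf(lam, mu, n0, horizon, seed=0):
    """Event-driven simulation of the M/M/inf user population:
    arrivals at rate lam, each present user departs at rate mu.
    Returns the list of (event_time, population) pairs."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trace = [(0.0, n0)]
    while t < horizon:
        rate = lam + n * mu              # total event rate in state n
        t += rng.expovariate(rate)
        if rng.random() < lam / rate:
            n += 1                       # arrival
        else:
            n -= 1                       # departure (impossible when n == 0)
        trace.append((t, n))
    return trace

# With lam/mu = rho = 5 (the values of Section IV), the long-run
# population is Poisson-distributed with mean 5.
trace = simulate_mm_inf(lam=1 / 600, mu=1 / 3000, n0=5, horizon=10**6)
```

The time average of the sampled population concentrates around ρ = 5, matching the Poisson stationary distribution of the M/M/∞ queue.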

The dynamics of our system are relatively simple, as it follows a Markov process. We know that the time the system stays in a state, i.e., the time during which the number of players is constant, follows an exponential distribution (with rate λ + nµ when n players are present). Then, if we keep using a basic reinforcement learning algorithm like the one proposed in [12], we would like all individuals to converge before any change of state. Thus, the average number of iterations needed for an individual to converge should be less than the average holding time of a state. In the next section, we describe a basic reinforcement learning algorithm proposed in the literature, and we present some modifications in order to adapt it to a stochastic environment.

III. ALGORITHM

In [12] we can find the original algorithm we use as the base for our work. It was proved that, for a fixed number of players, if this algorithm converges, it always converges to a Nash equilibrium. In our model, we consider that users arrive and depart after exponentially distributed times, so the number of players (the state of the system) is dynamic. One of our main objectives is to confirm whether this algorithm, with few changes, can be used to find the Nash equilibrium in dynamic conditions, or at least to keep the system at a Nash equilibrium as much as possible. The second objective is to make it as distributed as possible, meaning that we would like to get from the base stations only the essential information on the state of the system, or even no information at all. The result is presented in Algorithm 1, in which we use the idea of individual convergence, taken from [4]. The algorithm is based on the reinforcement of mixed strategies with the utility obtained by playing pure strategies. We consider the repartition Nt = {nt^c0, nt^c1} of players over the set of actions C = {c0, c1} at time t. Let βt(p) be the probability of player p ∈ P choosing the pure strategy ct(p) at step t.
The choice ct(p) performed by user p at step t results in an individual utility given by:

  ut(p) = s_{ct(p)} / ( nt^{ct(p)} · max(S) ),   (7)

where s_{ct(p)} is a constant in S = {sc0, sc1}. Then, each player updates his probability according to the rule:

  βt(p) = βt−1(p) + b · ( 1{ct(p) = c1} − βt−1(p) ) · ut(p).   (8)

Probability updates are carried out by each user until the difference between consecutive values of the probability β falls below a threshold ε, which means that the user has no incentive to change strategies. After reaching this threshold, no further calculations are carried out by this player. Nonetheless, there are cases when a player that has converged needs to resume calculating, as the number of players evolves over time. In [4], even though it is not explicitly stated, it can be inferred that information about arrivals and departures is distributed to each user by a centralized entity such as the base station. When this information reaches a user that has converged, his probabilities are reset to levels that give more weight to the previously selected strategy. Instead of having this information broadcast by the base station, we tested two heuristic approaches that compare the utility obtained throughout a time window with a distinctive pattern. Such a pattern signals a change in the state of the system, so, when it is found, users restart their probability at a given distance α from their convergence side. Depending on the size of the window used, false positives are signaled more or less frequently. We also have to set the policy that establishes which players are restarted when a change of state is detected (see Section IV-A).

Algorithm 1 Dynamic Distributed Algorithm.

1) Initialize βt−1(p) as the starting probability for new players in P.
2) For each player p:
   a) If player p has converged and the restarting conditions are met, then:
      i) If βt−1(p) ≈ 1, set βt−1(p) = 1 − α.
      ii) If βt−1(p) ≈ 0, set βt−1(p) = α.
   b) Player p performs a choice over C according to βt−1(p).
   c) Player p updates his probability βt(p) according to his choice, using (8).
   d) If |βt(p) − βt−1(p)| < ε, then player p has converged.
3) Remove players that departed, set t = t + 1 and go to step 1.
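One iteration of the learning core (steps 2b-2d, without arrivals, departures, or restarting) can be sketched as follows. This is our illustration, not the authors' code; the function name `step` and the dictionary-based interface are assumptions:

```python
import random

def step(beta, b, s, eps, rng):
    """One synchronous iteration of the reinforcement rule.
    beta: dict player -> probability of choosing strategy c1;
    s: pair (s_c0, s_c1); b: learning step; eps: convergence threshold.
    Returns the updated probabilities and the set of players whose
    update was smaller than eps (individual convergence)."""
    # step 2b: every player draws a pure strategy from its mixed strategy
    choice = {p: (1 if rng.random() < bp else 0) for p, bp in beta.items()}
    n1 = sum(choice.values())
    n0 = len(beta) - n1
    new_beta, converged = {}, set()
    for p, c in choice.items():
        # Eq. (7): share of s_c among same-choice players, normalized by max(S)
        u = s[c] / (max(s) * (n1 if c == 1 else n0))
        # Eq. (8) / step 2c: move beta toward 1 if c1 was played, toward 0 otherwise
        nb = beta[p] + b * ((1 if c == 1 else 0) - beta[p]) * u
        if abs(nb - beta[p]) < eps:       # step 2d: individual convergence
            converged.add(p)
        new_beta[p] = nb
    return new_beta, converged

# Usage with the parameters of Section IV (N = 5, S = {1, 3}, b = 0.3):
rng = random.Random(42)
beta = {p: 0.5 for p in range(5)}
for _ in range(1000):
    beta, converged = step(beta, b=0.3, s=(1, 3), eps=1e-4, rng=rng)
```

Since b · ut(p) ≤ 1, each update is a contraction toward the pure strategy just played, so the probabilities always remain in [0, 1].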

IV. SIMULATION

The simulation scenario is composed of 120 simulations, one for each set of conditions. For each simulation, 25 independent runs were made. Each independent run was composed of 2500 iterations of the algorithm. The arrival and departure process can be modeled as an M/M/∞ queue, where ρ is the average number of users in the system. For all simulations we use ρ = 3000/600 = 5, and we start simulations with 5 users. We consider the set S = {s1 = 1, s2 = 3}. We have picked an acceleration parameter b = 0.3, a convergence threshold ε = 10−4 and a restarting probability α = 0.3.

A. Simulation scenario

The simulation scenario has the following structure:

1) Change of state detection (Case):
   Case 1 - Original algorithm: as a control case we used the original algorithm from [12], in which changes of state are not detected and individual users never stop calculating, because convergence is taken globally.
   Cases 2, 3, 4 and 5 - Pattern of n-iteration memory: we used predefined patterns to detect arrivals. Starting from the most recent point, we check whether there has been a drop in the obtained performance relative to n slots before, and whether this drop in performance has been recurrent. For departures we used the same strategy, but with the mirror pattern, i.e., "lows" become "highs" and vice versa. We evaluated patterns with 1, 2, 5 and 10 iterations of memory.
   Case 6 - No restarting after individual convergence: there is no restarting on changes of state.
   Case 7 - Restarting on changes of state: information about changes of state is broadcast by the base station to each user that has converged, so that they can restart calculations.
   Case 8 - Entropy with w = 5: players that have already converged detect changes in the system through (5). If Qt(p) > τ, for a threshold τ ∈ [0, 1), a change of state has been detected and the user should restart. In our simulations we used τ = 0.8.
2) Starting probabilities for new users were set according to two strategies: a) using random values from a uniform distribution; and b) always using 0.5.
3) Users that needed to restart calculations followed one of two strategies: a) restarting all of them; and b) restarting each of them with probability 1/Pjt.
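The n-iteration-memory detectors of cases 2-5 can be sketched as a simple comparison of the last n utility samples against the n samples before them. This is our heuristic reading of the pattern rule, not the authors' exact implementation; the function name and the noise margin `tol` are assumptions:

```python
def detect_change(utils, n, tol=1e-9):
    """Heuristic n-iteration-memory detector (a sketch of cases 2-5):
    return 'arrival' if the utility dropped at every one of the last n
    slots relative to the slot n iterations before, 'departure' for the
    mirrored pattern, and None otherwise."""
    if len(utils) < 2 * n:
        return None                      # not enough history yet
    recent, earlier = utils[-n:], utils[-2 * n:-n]
    if all(r < e - tol for r, e in zip(recent, earlier)):
        return "arrival"                 # recurrent drop in performance
    if all(r > e + tol for r, e in zip(recent, earlier)):
        return "departure"               # mirror pattern: recurrent rise
    return None
```

A longer memory n makes the pattern harder to match by chance, which is consistent with the observation in Section IV-B that the longer patterns (case 5) yield better cumulative convergence than the shorter ones (case 2).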

Fig. 1: Distribution of CC2500 for simulations of cases 2, 3, 4 and 5.

Fig. 2: Distribution of CC2500 for simulations of cases 1, 6, 7 and 8.

B. Results and analysis

In Figure 1 we plot the distribution of the cumulative convergence at t = 2500 (CC2500) over the independent runs of simulation cases 2, 3, 4 and 5, with ρ = 3000/600, the mid-level rate we studied. The top plot shows simulations in which all the users that converged restart after a change of state is detected, and the bottom plot those in which users restart only with probability 1/Pjt. The left (white) side of each case shows the results when a random starting probability is used for new players, and the right (black) side those when new players use 0.5 as starting probability. The black dotted line shows the global average, while the thick black lines show the average for each particular simulation. As we can see, the averages on the left (white) side of the plot are always smaller than those on the right (black) side, meaning that fixing the starting probability for new users at 0.5 leads to better convergence rates. Also, we can see that the longer the pattern used to detect a change of state (case 2 uses the shortest, case 5 the longest), the better the cumulative convergence levels we achieve. This leads us to use only case 5 for further analysis.

Figure 2 shows the same plot for cases 1, 6, 7 and 8. Again, the performance obtained when assigning new players a random starting probability is always worse than when they are assigned a fixed starting probability. This leads us to remove the case of random starting probabilities from further consideration. On the other hand, this plot does not allow us to discard any other simulation case.

In Figures 3 and 4 we present the average over the 25 independent runs of both CCt (dashed lines) and UCt (solid lines) for cases 1, 5, 6 and 7. We have also plotted the strategies for restarting users as red lines, when we restart all converged users, or as blue lines, when they are restarted with probability 1/Pjt.

As we can see, there are no significant differences in cumulative convergence between the cases, but case 6 achieves the same level of cumulative convergence with a lower proportion of users calculating at any given time. This means that after users select one of the two technologies, they should keep their choice for as long as their call lasts. This can pose a problem at departures when both the service and arrival rates are small, because converged users will not

Fig. 3: Evolution of CCt (dashed lines) and UCt (solid lines) for ρ = 3000/600 and starting probability fixed at 0.5. No restarting strategies are considered for these cases. (a) Case 1. (b) Case 6.

Fig. 4: Evolution of CCt (dashed lines) and UCt (solid lines) for ρ = 3000/600 and starting probability fixed at 0.5. The strategies for restarting users are shown as red lines, when we restart all converged users, or as blue lines, when they are restarted with probability 1/Pjt. (a) Case 5. (b) Case 7.

be restarting, and they could end up in a repartition that is not a Nash equilibrium for a long time. Nonetheless, this might be a good trade-off when arrival rates are high, as they keep a steady flow of new users who will always move the repartition toward a Nash equilibrium.

V. CONCLUSIONS

From our simulations we can conclude that we can follow the algorithm in [12], originally developed for a fixed state (number of players), and use it in a dynamic environment (a number of players that varies due to Poisson arrivals and exponentially distributed sojourn times) with excellent levels of convergence to the Nash equilibrium (≈80%) and fast achievement of stability. The best strategy to follow for arrivals and departures seems to be keeping the probabilities reached in the previous state and using them as starting probabilities for the new state, leaving players that have already converged with the strategy they previously selected. This strategy does not affect

convergence and reduces the proportion of users calculating to achieve the Nash equilibrium. We would like to study in more detail the convergence rate of the distributed algorithm depending on the parameters of the learning process, and especially on the number of players.

REFERENCES

[1] D. Barth, L. Echabbi, and C. Hamlaoui. Optimal transit price negotiation: The distributed learning perspectives. Journal of Universal Computer Science, 14(5):745–765, 2008.
[2] L. Berlemann, C. Hoymann, G. Hiertz, and S. Mangold. Coexistence and interworking of IEEE 802.16 and 802.11(e). In Proceedings of VTC 2006, 2006.
[3] D. Bertsekas and R. Gallager. Data Networks. Prentice Hall, 1987.
[4] P. Coucheney, C. Touati, and B. Gaujal. Fair and efficient user-network association algorithm for multi-technology wireless networks. In Proceedings of INFOCOM 2009 Mini Conference, 2009.
[5] X. Jing, S. Mau, and R. Matyas. Reactive cognitive radio algorithms for co-existence between IEEE 802.11b and 802.16a networks. In Proceedings of Globecom 2005, 2005.
[6] S. Kiran and R. Chandramouli. An adaptive energy efficient link layer protocol for bursty transmissions over wireless data networks. In Proceedings of ICC 2003, 2003.
[7] P. Maillé and B. Tuffin. The progressive second price mechanism in a stochastic environment. Netnomics, 5:119–147, 2003.
[8] P. Maillé, B. Tuffin, Y. Xing, and R. Chandramouli. User strategy learning when pricing a RED buffer. Simulation Modelling Practice and Theory, 17(5):548–557, March 2009.
[9] R. Myerson. Population uncertainty and Poisson games. International Journal of Game Theory, 27, 1998.
[10] V. Raghunathan and P. Kumar. On delay-adaptive routing in wireless networks. In Proceedings of CDC 2004, 2004.
[11] R. W. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2(1), 1973.
[12] P. S. Sastry, V. V. Phansalkar, and M. A. L. Thathachar. Decentralized learning of Nash equilibria in multi-person stochastic games with incomplete information. IEEE Transactions on Systems, Man and Cybernetics, 24(5):769–777, May 1994.
[13] D. Stahl. The inefficiency of first and second price auctions in dynamic stochastic environments. Netnomics, 4:1–18, 2002.
[14] Y. Xing and R. Chandramouli. QoS constrained secondary spectrum sharing. In Proceedings of DySPAN 2005, 2005.
[15] D. Monderer and L. Shapley. Potential games. Games and Economic Behavior, 14:124–143, 1996.
[16] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter. A survey on networking games in telecommunications. Computers and Operations Research, 33(2), 2006.
[17] S. Wang and Z. Dou. Exclusive trend detector with variable observation windows for signal detection. Electronics Letters, 33(17):1433–1435, August 1997.
