AN SMF APPROACH TO DISTRIBUTED AVERAGE CONSENSUS IN CLUSTERED SENSOR NETWORKS

Amaresh Malipatil, Yih-Fang Huang
University of Notre Dame, Dept. of Electrical Engineering
Notre Dame, IN 46556
amalipat,[email protected]

Stefan Werner
Helsinki University of Technology, Department of Signal Processing and Acoustics
Espoo, Finland
[email protected]

ABSTRACT

Distributed sensor networks employ multiple nodes to collectively estimate or track parameter(s) of interest without any central fusion node. Individual nodes may observe (sense) and estimate the parameter of concern as well as cooperate with other nodes to arrive at a global consensus estimate. We propose a simple heuristic algorithm using a set-membership filtering approach to adaptively determine the weights of an average consensus estimator in a clustered network. Here, all the nodes in a cluster, called clustermembers, send their estimates to a clusterhead which computes the average consensus estimate. In this approach, nodes with low signal-to-noise ratios are tagged as noisy and their estimates are accordingly given less weight. Simulation results show the ability of the proposed scheme to effectively weight the estimates according to their SNRs and yield performance similar to that of a best linear unbiased estimator.

1. INTRODUCTION

Advantages such as distributed complexity, scalability, and fault-tolerance have made distributed estimation a hot topic in sensor networks. The application space of distributed sensor networks (DSN) is quite large and includes, but is not limited to, environmental sensing/monitoring, emergency response, patient health monitoring, inventory management, reconnaissance, and military communications, see, e.g., [1, 2]. In distributed sensor networks, the sensors make observations on certain parameter(s) in accordance with a specific application. The data are then processed to derive an estimate of the parameter at each node, as well as, through cooperation between the nodes, a consensus estimate or decision which may dictate the due course of action.

The distributed parameter estimation problem arising in DSNs calls for an efficient utilization of the network resources while meeting some application-specific performance goals. Sensor network resources typically refer to computational resources such as digital signal processors, energy sources like the batteries that power the sensor nodes, and communication

resources such as bandwidth for connecting the nodes in the network. Performance constraints could be defined by one of several metrics such as bit-error rate (BER), mean-squared error (MSE), network throughput, etc.

In this paper, we employ a set-membership filtering (SMF) algorithm, specifically the set-membership normalized least-mean-squares (SM-NLMS) algorithm [3], to address the issues of computational complexity and communication overhead in an effort to improve the efficiency of resource utilization while meeting performance requirements. SMF algorithms exhibit a salient feature whereby the parameter estimate is updated only when the magnitude of the error exceeds a predefined threshold, which is an indication that the observed data contain sufficient fresh information [3]-[7]. This feature makes the SMF approach a viable candidate for DSN applications where resources have to be utilized efficiently [8, 9]. Using SMF, various diffusion strategies based on selective cooperation have been proposed in [9] for distributed parameter estimation.

It has been shown analytically and through simulations that cooperation among the nodes can improve estimation accuracy, stability and convergence over those of stand-alone operation [10]. Cooperation implies sharing of information among the nodes. The communication requirement among the nodes in a fully decentralized network increases rapidly as the number of sensors in the network or the density of sensors increases. A more effective approach to cooperation is via clustering of nodes, wherein a subset of nodes forms a cluster, see, e.g., [11]. Clustering serves as a hybrid between fully centralized and fully decentralized networks. It helps retain scalability as the number of sensors grows, allows for exploiting spatial diversity in the cluster, and makes routing through the network efficient. Various clustering algorithms have been proposed in the literature based on different objectives such as load balancing, energy consumption, maintenance costs, etc., see, e.g., [12, 13].

This paper considers clustered DSNs with a focus on developing resource-efficient distributed average consensus. Distributed average consensus has been explored by many researchers, see, e.g., [14] and the references therein. In the scenario

considered here, each cluster has a set of nodes called clustermembers (CMs), one of which is designated as the clusterhead (CH). The CH coordinates the flow of information within the cluster and among adjacent clusters. In [8], the clusters do not have a designated CH; instead, each cluster has dedicated hardware to process the information from its CMs, and some hardware shared between the clusters serves to compute the parameter updates and combine them to yield a consensus estimate. In [15], the nodes fuse their estimates, derived from the received data or from estimates of nodes in the neighborhood, to form a consensus estimate. In this paper, we assume that the CH derives the consensus estimate based on the estimates of all the CMs within its own cluster as well as the estimates of the neighboring CHs.

A method to derive the consensus estimate is to find a weighted average of the estimates of the local node and those of its neighbors [16, 17]. The weights are usually derived with some pre-defined optimality criterion, e.g., minimum variance [18]. Most of the weight estimation algorithms that exist in the literature are computationally intensive and can thus be a serious burden when energy in the network is at a premium. Employing the SMF approach, each CM sends its estimate to the CH only when the estimation error exceeds a predefined threshold, which is more likely to happen in low SNR regimes. The number of updates sent by a particular CM in the steady state can thus be used to determine the weights of the average consensus estimator. As such, the weights are not calculated at every iteration, thereby considerably reducing the complexity. In a dense cluster of nodes, since the CH receives estimates from multiple CMs, interference between the CMs can impact the performance. Due to the page limit, this issue is not addressed here, and we assume that interference is fully mitigated.

This paper is organized as follows. In Section 2, we present the problem formulation and the notation that will be used. A brief description of SMF is given in Section 3. The SM-NLMS distributed estimator is outlined in Section 4, and the weights for the average consensus estimator are derived in Section 5. Simulation results are provided in Section 6, followed by conclusions in Section 7.

2. PROBLEM FORMULATION AND NOTATION

Let $w_o$ denote the $N \times 1$ deterministic unknown parameter vector to be estimated by the sensor network. It is assumed that the network has gone through an initial phase of organizing groups of nodes into clusters and designating a clusterhead for each cluster. Let $N_c$ and $\{M_i\}_{i=1}^{N_c}$ denote, respectively, the number of clusters and the number of CMs in each cluster. Furthermore, let $\hat{w}^-_{n,j}(k)$ and $\hat{w}_{CH_n}(k)$ denote, respectively, the parameter estimate of the $j$th CM and the consensus estimate of the CH in the $n$th cluster at instant $k$. We also denote the neighborhood of cluster $n$ by $\mathcal{N}_n$, which is the set of all clusters connected to cluster $n$, including cluster $n$ itself. Thus $CH_n$ shares estimates with $\{CH_i\}_{i \in \mathcal{N}_n}$.

The average consensus estimate is calculated as follows:

$\hat{w}^-_{CH_n}(k) = \sum_{j=1}^{M_n} c_{n,j}\,\hat{w}^-_{n,j}(k)$    (1)

$\hat{w}_{CH_n}(k) = \frac{1}{L_n}\Big[\hat{w}^-_{CH_n}(k) + \sum_{i \in \mathcal{N}_n,\, i \neq n} \hat{w}_{CH_i}(k)\Big]$    (2)

where $c_{n,j}$ are the weighting coefficients to be computed for the proper combining of the estimates, and they satisfy the condition $\sum_{j=1}^{M_n} c_{n,j} = 1$. In (2), $L_n$ is the size of cluster $n$'s neighborhood $\mathcal{N}_n$, i.e., the total number of clusters in $\mathcal{N}_n$. The CH feeds back the consensus estimate to all the CMs in the cluster as well as the CHs in its neighborhood. Each CM can either directly use the CH's estimate as the a priori estimate for its next iteration or use a weighted average of its own estimate and the CH's estimate.
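To make the two-stage combining in (1) and (2) concrete, a minimal sketch in Python/NumPy is given below. The function names and array layout are our own illustrative choices, not part of the paper; the CH would perform these two steps once per iteration after collecting the CM estimates and the neighboring CHs' estimates.

```python
import numpy as np

def intra_cluster_average(cm_estimates, weights):
    """Eq. (1): weighted average of the clustermember estimates w^-_{n,j}(k).

    cm_estimates : (M_n, N) array, one row per CM estimate
    weights      : (M_n,) array of combining coefficients c_{n,j}
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # enforce sum_j c_{n,j} = 1
    return w @ np.asarray(cm_estimates)          # w^-_{CH_n}(k)

def consensus_estimate(w_chn_minus, neighbor_ch_estimates):
    """Eq. (2): average the local intermediate estimate with the consensus
    estimates received from the neighboring clusterheads (division by L_n)."""
    stacked = np.vstack([w_chn_minus] + list(neighbor_ch_estimates))
    return stacked.mean(axis=0)                  # w_{CH_n}(k)
```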

3. SET MEMBERSHIP FILTERING

Set-membership filtering (SMF) algorithms are developed using a bounded-error criterion, see, e.g., [3], [7]. Specifically, the objective of those algorithms is that the estimation error be bounded (in magnitude), where the error bound is usually application-specific. This type of objective is distinctively different from that of classical adaptive algorithms such as recursive least-squares (RLS) or least-mean-squares (LMS), which aim to minimize either the time average, e.g., RLS, or the ensemble average, e.g., LMS. An SMF algorithm finds a set of feasible filter coefficients, namely $\hat{w}$, such that the resulting estimation errors are bounded in magnitude over a model space $\mathcal{S}$ that consists of all the input-desired output pairs $(x, d)$, where $x$ is an $N$-dimensional complex vector while, for simplicity, $d$ is usually a complex scalar. At time instant $k$, the filter output is given by $y(k) = \hat{w}^H x(k)$, and the error is defined by $e_k(\hat{w}) \triangleq d(k) - y(k)$. In other words, an SMF algorithm seeks to find $\hat{w}$ such that the filter error satisfies

$|e_k(\hat{w})| \leq \gamma \quad \forall (x, d) \in \mathcal{S}$    (3)

where $\gamma$, an upper bound on the magnitude of the filter error $e_k$, is chosen a priori. Thus, given an input-desired output pair $(x(k), d(k))$ at any time instant $k$, we define the constraint set $\mathcal{H}_k$,

$\mathcal{H}_k = \{\hat{w} \in \mathbb{C}^N : |d(k) - \hat{w}^H x(k)| \leq \gamma\}$    (4)

which is the set of parameters that satisfy the error-bound specification, namely (3), and are consistent with the input-desired output data pair. Since (3) must be satisfied for all $k$, an exact membership set, defined by $\psi_k = \bigcap_{i=1}^{k} \mathcal{H}_i$, characterizes the set of legitimate filter coefficients that meet the bounded-error specification and the observations up to time $k$. Note that $\psi_k$ is a monotone non-increasing sequence of sets. As such, the SMF criterion results in a region estimate defined by the feasibility set

$\mathcal{W} = \bigcap_{(x,d) \in \mathcal{S}} \{\hat{w} \in \mathbb{C}^N : |d - \hat{w}^H x|^2 \leq \gamma^2\}$    (5)

A properly chosen error bound will yield a non-empty feasibility set, i.e., the set of all legitimate estimates resulting from the entire model space $\mathcal{S}$. In fact, the choice of the bound offers a convenient complexity-performance trade-off. This is in contrast with RLS and LMS, which provide one single point estimate at each time instant. In this sense, the SMF estimates can be less sensitive to model deviations. At any particular time instant $k$, if the current set of estimates $\psi_k$ already satisfies the bounded-error criterion (3) for the given input-desired output pair, then the set need not be altered and the parameter estimates are not updated. On the other hand, when the error exceeds the pre-defined threshold, it is considered that the input data contain some innovation and the membership set is updated. In short, implementation of SMF algorithms involves an innovation check on the received data, followed by the calculation of updated parameter estimates when necessary. In most applications, the updating process is needed very infrequently. This leads to the unique feature of the SMF paradigm, namely, data-dependent selective update of the parameter estimates. Furthermore, in most of the cases studied, the SMF adaptive algorithms offer estimation performance comparable to that of LMS and RLS, which update parameter estimates regardless of the benefits of such updates. This feature has recently been exploited to great advantage in studies of distributed estimation [8, 9].

It is often impractical to find an analytic expression for $\psi_k$. Therefore, it is usually more convenient to find some analytically tractable outer bounding sets for $\psi_k$. Various SMF algorithms have been proposed in the literature, see, e.g., [3]-[7]. Those algorithms basically differ in the manner in which they determine the outer bounding sets. In [3], the authors offer a solution using optimal bounding spheroids along with an optimized step size for the updates.

4. SM-NLMS DISTRIBUTED ESTIMATOR

This section discusses the SMF strategies for deriving the average consensus. Each CM embodies an SM-NLMS [3] adaptive filter, and the CH computes a consensus estimate using the weighted average. The SM-NLMS adaptive algorithm, like most SMF algorithms, possesses the attractive features of a simple update procedure, a data-dependent step size and selective update of parameter estimates. The adaptive step size yields faster convergence compared to the traditional NLMS algorithm. Using SM-NLMS, each CM can update its parameter estimate based on the collected observations. The parameter estimate is updated only when the estimation error

is greater than a pre-defined threshold; otherwise it is not updated. This provides a significant computational reduction, resulting in energy savings which contribute toward a longer lifetime of the sensor nodes. In addition, the estimates are shared with the CH only when there is an update, which saves the energy required for transmission and also reduces the traffic on the network. In contrast, if estimates are shared after every iteration, the chances of traffic congestion increase, which in turn might cause undesirable delays since packets have to be queued or retransmitted.

SM-NLMS is a supervised learning algorithm implemented in each CM by the following set of equations:

$\hat{w}_{i,j}(k-1) = \hat{w}_{CH_i}(k-1)$
$e(k) = d(k) - \hat{w}^H_{i,j}(k-1)\,x(k)$
$\hat{w}^-_{i,j}(k) = \hat{w}_{i,j}(k-1) + \alpha(k)\,e^*(k)\,\dfrac{x(k)}{x^H(k)\,x(k)}$    (6)

where $*$ denotes complex conjugation. The data-dependent step size $\alpha(k)$ is given by

$\alpha(k) = \begin{cases} 1 - \gamma/|e(k)|, & \text{if } |e(k)| > \gamma \\ 0, & \text{otherwise} \end{cases}$    (7)

Each CM uses the CH's estimate, $\hat{w}_{CH_i}(k-1)$, for its next iteration. The updated estimate, $\hat{w}^-_{i,j}(k)$, is then transmitted to the CH, which computes the consensus estimate as formulated in (1) and (2), and the process iterates.
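The per-CM recursion in (6) and (7), including the innovation check that gates both the computation and the transmission to the CH, can be sketched as below. This is an illustrative implementation with names of our choosing, not the authors' code.

```python
import numpy as np

def sm_nlms_update(w_ch_prev, x, d, gamma):
    """One SM-NLMS iteration at a clustermember, following (6)-(7).

    w_ch_prev : (N,) complex array, CH consensus estimate w_{CH_i}(k-1),
                used as the a priori estimate
    x, d      : input vector x(k) and desired output d(k)
    gamma     : error-magnitude bound

    Returns (w_new, updated); `updated` is True only when |e(k)| > gamma,
    i.e., when the data carry enough innovation to warrant an update.
    """
    w = np.asarray(w_ch_prev).copy()
    e = d - np.vdot(w, x)                    # e(k) = d(k) - w^H x(k)
    if np.abs(e) > gamma:                    # innovation check
        alpha = 1.0 - gamma / np.abs(e)      # data-dependent step size, eq. (7)
        w = w + alpha * np.conj(e) * x / np.vdot(x, x).real
        return w, True
    return w, False                          # no update, nothing transmitted
```

A CM would call this routine once per observation and transmit the updated estimate to the CH only when the innovation check fires.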

5. SMF-BASED AVERAGE CONSENSUS ESTIMATOR: SMFACE

In the SMF framework, the consensus estimate from multiple CMs requires finding the intersection of their corresponding feasibility sets. Alternatively, consensus building can be done pairwise and sequentially. Either of these approaches becomes very computationally intensive as the number of CMs increases. In this paper, we propose a simple yet effective solution. The CH feeds back the consensus estimate not only to the CMs in its own cluster but also to the CHs in its neighborhood. Therefore, any low-quality estimate that is improperly weighted has the potential to degrade the quality of estimates in all the clusters in the neighborhood.

In practical scenarios, the operating SNR of each CM may not be known a priori to the CH, hence it needs to be estimated adaptively using the information obtained from the CM. In an SMF-based approach, though, this can be tackled in a much simpler way. After the adaptive algorithm reaches a steady state, the CH can keep a count of the number of updates received from each CM. If the CMs have reasonably good SNR, the estimation accuracy is in the desired range defined by the error bound $\gamma$ and the updates become sparse once the steady state is reached. In contrast, for a node with low SNR, the magnitude of the error frequently exceeds the threshold $\gamma$ and the parameter is accordingly updated frequently. Thus, by periodically monitoring the number of updates, $Nu_{i,j}$, received from the $j$th CM in the $i$th cluster, the CH can gauge the relative operating SNR of each CM in the cluster. Though there is no analytic expression thus far in the literature relating the SNR to the number of updates, we can assume that the more updates a particular CM sends, the lower its operating SNR is. Accordingly, the estimates from CMs with lower SNRs can be given lower weights relative to the estimates from CMs that have higher SNRs. Based on this line of reasoning, we propose a general class of SMF-based average consensus estimators (SMFACE) formulated as

$\hat{w}^-_{CH_i}(k) = \sum_{j=1}^{M_i} c_{i,j}\,\hat{w}^-_{i,j}(k)$    (8)

$c_{i,j} = f(Nu_{i,j})$    (9)

where $f(x)$ is a monotonically non-increasing scalar function of $x$. We will study two specific cases of this function, one linear and the other quadratic, i.e.,

$c_{i,j} = 1/(\kappa_l + Nu_{i,j})$    (SMFACE-LIN)    (10)

$c_{i,j} = 1/(\kappa_q + Nu_{i,j}^2)$    (SMFACE-QUAD)    (11)

$c_{i,j} = c_{i,j} \Big/ \sum_{j=1}^{M_i} c_{i,j}$    (12)

where the parameters $\kappa_l$ and $\kappa_q$ are introduced for numerical stability reasons and can be adjusted in accordance with the desired performance, and (12) normalizes the coefficients so that they sum to one. The coefficients $c_{i,j}$ are recomputed periodically by accumulating the number of updates over a given interval. Simulation results using these two average consensus algorithms are given in the next section.
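A sketch of the periodic weight recomputation in (9)-(12) follows; the function name is ours, and the default values of $\kappa_l$ and $\kappa_q$ match those used later in the simulations.

```python
import numpy as np

def smface_weights(update_counts, mode="lin", kappa_l=1.0, kappa_q=10.0):
    """Recompute the combining coefficients c_{i,j} from the update counts
    Nu_{i,j} accumulated by the CH over one monitoring interval, eqs. (9)-(12).

    update_counts : (M_i,) array of per-CM update counts
    mode          : "lin" for SMFACE-LIN (10), "quad" for SMFACE-QUAD (11)
    """
    nu = np.asarray(update_counts, dtype=float)
    if mode == "lin":
        c = 1.0 / (kappa_l + nu)             # eq. (10)
    else:
        c = 1.0 / (kappa_q + nu**2)          # eq. (11)
    return c / c.sum()                       # normalization, eq. (12)
```

For example, smface_weights([3, 4, 2, 48], mode="quad") assigns a near-zero weight to the CM that needed 48 updates in the interval, which under the reasoning above is treated as the low-SNR node.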

6. SIMULATION RESULTS AND DISCUSSION

The simulation framework includes two clusters, each consisting of four CMs and one CH. The deterministic unknown parameter $w_o$ is an $N$-dimensional vector with $N = 10$, normalized such that its 2-norm is unity; it is otherwise chosen randomly. The input signal is generated by passing white Gaussian noise through single-pole filters whose pole locations $\{\beta_{i,j}\}_{i=1,\,j=1}^{i=N_c,\,j=M_i}$ are randomly generated from the uniform distribution on (0,1). SMFACE-LIN and SMFACE-QUAD are applied to compute the coefficients of the consensus estimator by setting $\kappa_l = 1$ and $\kappa_q = 10$. NLMS is also simulated with the update equation

$\hat{w}^-_{i,j}(k) = \hat{w}_{i,j}(k-1) + \mu\,e^*(k)\,\dfrac{x(k)}{x^H(k)\,x(k)}$    (13)

where the update factor is $\mu = 0.2$. The parameters were set so as to have a fair comparison in terms of steady-state error. For comparison, an optimal weight combiner (OWC) using the ideal SNRs is also used, whose weights are calculated by

$c^o_{i,j} = \dfrac{\rho_{i,j}}{\sum_{j=1}^{M_i} \rho_{i,j}}, \quad i = 1, \cdots, N_c, \; j = 1, \cdots, M_i$    (14)

where $\rho_{i,j}$ is the SNR at the $j$th CM in the $i$th cluster. For each Monte Carlo run, 3000 input-desired output pairs were

generated and the results were averaged over 100 Monte Carlo runs. The number of updates is accumulated for every 100 input-desired output data pairs. The MSE of one of the CMs, CM$_{1,1}$, is plotted in Figs. 1 and 2 for various schemes and for two different cases: Case 1 - all the CMs in both clusters have an SNR of 30 dB; Case 2 - the fourth CM in the first cluster has an SNR $\rho_{1,4} = 5$ dB while the SNR of all other CMs is still 30 dB. The error threshold is set as $\gamma = \sqrt{5}\rho = \sqrt{5} \times 10^{-3}$ for the SM-NLMS adaptive filters, based on the assumed cluster SNR of 30 dB. If the SNR for a particular CM is low, this leads to under-bounding; as a consequence, excessive updating may take place in steady state at low SNRs. We define the equal weight combiner (EWC) as

$c_{i,j} = \dfrac{1}{M_i}, \quad j = 1, 2, \cdots, M_i$    (15)
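For completeness, the two benchmark combiners in (14) and (15) can be sketched as follows; this is again an illustrative implementation that assumes the ideal linear-scale SNRs are known for the OWC.

```python
import numpy as np

def owc_weights(snr_linear):
    """Optimal weight combiner, eq. (14): weights proportional to the
    ideal (linear-scale) SNRs of the CMs in a cluster."""
    rho = np.asarray(snr_linear, dtype=float)
    return rho / rho.sum()

def ewc_weights(num_cms):
    """Equal weight combiner, eq. (15): uniform weights 1/M_i."""
    return np.full(num_cms, 1.0 / num_cms)
```

In Case 2, for instance, cluster 1 would use owc_weights([1000, 1000, 1000, 10**0.5]), since 30 dB and 5 dB correspond to linear SNRs of 1000 and roughly 3.16.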

In Case 1, SMFACE-LIN and SMFACE-QUAD perform similarly to SMNLMS-EWC, as shown in Fig. 1. As expected, all the SM-NLMS algorithms converge faster than NLMS-OWC. Note that, since all the SNRs are equal, OWC is the same as EWC. In addition to faster convergence, all the SM-NLMS algorithms required updating only about 6% of the time, see Table 1, whereas NLMS requires updating at every iteration.

Table 1. Percentage of updates in Case 1.
Strategy          Cluster 1   Cluster 2
SMNLMS-EWC/OWC    5.63%       5.77%
SMFACE-LIN        5.7%        5.9%
SMFACE-QUAD       5.93%       6.06%

In Case 2, SMNLMS-EWC performs significantly worse. It is interesting to note that, in this case, even though the bad CM belongs to Cluster 1, it adversely affects the performance of Cluster 2 as well. This is attributable to the coordination between the two CHs. In comparison, both SMFACE-LIN and SMFACE-QUAD perform very well in terms of convergence and final MSE. This corroborates our claim that the number of updates in an SMF adaptive algorithm is a reliable indicator of the operating SNR. The final MSE of SMFACE-QUAD is the same as that of SMNLMS-OWC, while the final MSE of SMFACE-LIN is 1 dB worse than that of SMFACE-QUAD. The benefit of SMFACE is also shown through a further reduction in the number of updates required compared to EWC, as seen in Table 2. In particular, the update rate within Cluster 2 is cut down considerably owing to the improved estimation in Cluster 1.

Table 2. Percentage of updates in Case 2.
Strategy       Cluster 1   Cluster 2
SMNLMS-EWC     48.8%       33%
SMFACE-LIN     30.37%      10.17%
SMFACE-QUAD    28.07%      7.73%
SMNLMS-OWC     26.73%      5.83%

[Figure 1: MSE (dB) versus iteration for SMNLMS-EWC, SMFACE-LIN, SMFACE-QUAD, and NLMS-OWC.]

Fig. 1. MSE of CM$_{1,1}$ for Case 1: $\rho_{i,j} = 30$ dB $\forall i, j$.

[Figure 2: MSE (dB) versus iteration for SMNLMS-EWC, SMFACE-LIN, SMFACE-QUAD, NLMS-OWC, and SMNLMS-OWC.]

Fig. 2. MSE of CM$_{1,1}$ for Case 2: $\rho_{1,4} = 5$ dB, $\rho_{i,j} = 30$ dB for all $(i,j) \neq (1,4)$.

7. CONCLUSION

We propose an SMF-based scheme for average consensus estimation in a clustered sensor network. This method is based on the assumption that a clustermember sending more frequent updates than other nodes is a low-SNR node. Based on this, two variations of average consensus estimators were presented. Simulation results show that the proposed method provides very good MSE performance with significantly reduced complexity compared to traditional weight estimators.

8. REFERENCES

[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 4, pp. 102-114, Aug. 2002.

[2] S. S. Iyengar and R. R. Brooks, Eds., Distributed Sensor Networks. New York: Chapman & Hall/CRC Press, 2005.

[3] S. Gollamudi, S. Nagaraj, S. Kapoor, and Y.-F. Huang, "Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size," IEEE Signal Processing Lett., vol. 5, no. 5, pp. 111-114, May 1998.

[4] E. Fogel and Y. F. Huang, "On the value of information in system identification - bounded noise case," Automatica, vol. 18, no. 2, pp. 229-238, March 1982.

[5] J. R. Deller, Jr., "Set-membership identification in digital signal processing," IEEE ASSP Magazine, vol. 6, pp. 4-22, Oct. 1989.

[6] S. Nagaraj, S. Gollamudi, S. Kapoor and Y. F. Huang, "BEACON: An adaptive set-membership filtering technique with sparse updates," IEEE Trans. Signal Processing, vol. 47, no. 11, pp. 2928-2941, Nov. 1999.

[7] P. S. R. Diniz and S. Werner, "Set-membership binormalized LMS data-reusing algorithms," IEEE Trans. Signal Processing, vol. 51, no. 1, pp. 124-134, Jan. 2003.

[8] S. Werner, M. Mohammed, Y. F. Huang and V. Koivunen, "Decentralized set-membership adaptive estimation for clustered sensor networks," Proc. 2008 IEEE Int'l Conf. Acoustics, Speech and Signal Processing, pp. 3573-3576, Mar. 31 - Apr. 4, 2008.

[9] S. Werner, Y. F. Huang, M. L. R. de Campos and V. Koivunen, "Distributed parameter estimation and selective cooperation," Proc. 2009 IEEE Int'l Conf. Acoustics, Speech and Signal Processing (to appear).

[10] C. G. Lopes and A. H. Sayed, "Diffusion least-mean squares over adaptive networks: formulation and performance analysis," IEEE Trans. on Signal Processing, vol. 56, no. 7, pp. 3122-3136, July 2008.

[11] S.-H. Son, M. Chiang, S. R. Kulkarni and S. C. Schwartz, "The value of clustering in distributed estimation for sensor networks," Int'l Conf. on Wireless Networks, Communications and Mobile Computing, vol. 2, pp. 969-974, June 13-16, 2005.

[12] O. Younis and S. Fahmy, "HEED: A hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks," IEEE Trans. on Mobile Computing, vol. 3, no. 4, pp. 366-379, Oct.-Dec. 2004.

[13] J. Y. Yu and P. H. J. Chong, "3hBAC (3-hop between adjacent clusterheads): a novel non-overlapping clustering algorithm for mobile ad hoc networks," IEEE Pacific Rim Conf. on Communications, Computers and Signal Processing, vol. 1, pp. 318-321, Aug. 28-30, 2003.

[14] L. Xiao, S. Boyd, and S. J. Kim, "Distributed average consensus with least-mean-square deviation," Journal of Parallel and Distributed Computing, vol. 67, no. 1, pp. 33-46, 2007.

[15] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proc. of the IEEE, vol. 95, no. 1, pp. 215-233, Jan. 2007.

[16] I. D. Schizas, A. Ribeiro and G. B. Giannakis, "Consensus in ad hoc WSNs with noisy links - Part I: distributed estimation of deterministic signals," IEEE Trans. on Signal Processing, vol. 56, no. 1, pp. 350-364, Jan. 2008.

[17] R. Carli, A. Chiuso, L. Schenato and S. Zampieri, "Distributed Kalman filtering using consensus strategies," IEEE Journal on Selected Areas in Communications, vol. 26, no. 4, pp. 622-633, May 2008.

[18] A. Speranzon, C. Fischione and K. H. Johansson, "Distributed and collaborative estimation over wireless sensor networks," IEEE Conf. on Decision and Control, pp. 1025-1030, Dec. 13-15, 2006.

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. An Interpersonal Neurobiology Approach to Psychotherapy. Daniel J Siegel. Psychiatric Annals; Apr 2006; 36, 4; Psychology Module pg. 248 ...