Optimal Sensor Placement with a Statistical Criterion for Subspace-Based Damage Detection
Michael Döhler, Kenny Kwan, Dionisio Bernal
Northeastern University, Department of Civil and Environmental Engineering, Center for Digital Signal Processing, 360 Huntington Avenue, Boston, MA 02115, USA
[email protected], [email protected], [email protected]
Abstract
Subspace-based fault detection algorithms have proven to be efficient for the detection of changes in the modal parameters for damage detection of vibrating structures. With these algorithms, a state-space model from the reference condition of a structure is confronted with output-only vibration data from a possibly damaged condition in a χ2 test on a damage detection residual. The outcome of this test is compared to a threshold to decide whether damage is present. In this paper, the problem of optimal sensor placement for this damage detection algorithm is considered based on the statistical properties of the χ2 test. Using a model of the structure, sensor positions are chosen such that the non-centrality parameter of the χ2 distribution is maximized for a certain set of damages. It is anticipated that this approach indirectly leads to a maximization of the power of the test. The efficiency of the approach is shown in numerical simulations.
Keywords: Damage detection, Subspace methods, Optimal sensor placement, Fisher information, Statistical power
1. Introduction
Optimal sensor placement is an important issue in the dynamic assessment of mechanical or civil structures in order to measure the most significant information for a specific objective. This subject has received considerable attention in the literature and solutions have been presented for different problems. Many works aim at optimal system identification performance, e.g. with the target of identifying mode shapes that are as linearly independent as possible, yielding maximal signal strength of the modal responses or maximizing the kinetic energy of the structural system [1], [2]. Closely related are entropy-based methods, where the sensor locations are chosen to minimize the uncertainty in the desired estimates or, equivalently, the information entropy [3]. A common methodology in these methods is to select the sensor layout that maximizes the Fisher information of a desired quantity in the measured data, where usually strong simplifications of the statistical properties are made. Other methods optimize e.g. observability or controllability measures [4], [5]. An overview of some of these methods with applications can be found in [6], [7], [8]. In this work, the optimal sensor placement for a subspace-based damage detection algorithm [9], [10], [11], [12] is considered. With this algorithm, parameters from the (healthy) reference state of a structure are confronted with output-only vibration data from a state that is tested to be healthy or damaged. In the tested state no system identification step is necessary; instead, the measured data is processed directly. A comparison of the measured data to the reference parameters is performed using a χ2 test, which is compared to a threshold. The criterion for optimal sensor placement is the optimal detectability of damages, which corresponds to the selection of a sensor layout that maximizes the power of the damage detection test (defined as the probability of detecting damage when it is present).
Based on the strategy in [13], an algorithm is developed for the selection of an optimal sensor placement for a fixed number of sensors. Also, a strategy is proposed to validate the obtained results. This paper is organized as follows. In Section 2, the subspace-based damage detection test is recalled. In Section 3, the strategy for an optimal sensor placement is developed, and results on a numerical simulation are shown in Section 4.
2. Subspace-Based Damage Detection
2.1 Models and Parameters
The use of the state-space representation for output-only vibration-based structural monitoring is well-established, which corresponds to monitoring the eigenstructure of the discrete-time model

x_{k+1} = A x_k + v_k,
y_k = C x_k + w_k,    (1)

with the states x_k ∈ ℝ^n, the measured outputs y_k ∈ ℝ^r, the state transition matrix A ∈ ℝ^{n×n} and the observation matrix C ∈ ℝ^{r×n}, where r is the number of sensors and n is the system order. The excitation v_k is an unmeasured Gaussian white noise sequence and w_k is the measurement noise.
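As an illustration, model (1) can be simulated directly. The following sketch uses hypothetical system matrices and sizes, chosen only to show the recursion; it is not the example system of Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small system: n = 4 states, r = 2 outputs (for illustration only)
n, r, N = 4, 2, 10000
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable transition matrix
C = rng.standard_normal((r, n))                          # observation matrix

x = np.zeros(n)
Y = np.empty((N, r))
for k in range(N):
    v = rng.standard_normal(n)        # unmeasured white noise excitation v_k
    w = 0.1 * rng.standard_normal(r)  # measurement noise w_k
    Y[k] = C @ x + w                  # y_k = C x_k + w_k
    x = A @ x + v                     # x_{k+1} = A x_k + v_k
```

Scaling an orthogonal matrix by 0.9 places all eigenvalues of A strictly inside the unit circle, so the simulated response is stationary.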
The eigenstructure (λ, ϕ) of system (1) results from

det(A − λ_i I) = 0,  A φ_i = λ_i φ_i,  ϕ_i = C φ_i,

where λ_i and φ_i are the eigenvalues and eigenvectors of A, and ϕ_i are the corresponding mode shapes. The eigenstructure (λ, ϕ) is a canonical parameterization of system (1) and is considered as the system parameter

θ = ϑ = [Λ^T  vec(Φ)^T]^T,    (2)
where Λ = [λ_1 … λ_n]^T is the vector containing all eigenvalues, Φ = [ϕ_1 … ϕ_n] is the matrix whose columns are the mode shapes and vec denotes the vectorization operator.
2.2 The Subspace-Based Damage Detection Algorithm
In [9], [10] a residual function was proposed to detect changes in the eigenstructure θ from the measurements y_k without actually identifying the eigenstructure in the possibly damaged state. The considered residual is associated with a covariance-driven output-only subspace identification algorithm. Let R_i = E(y_k y_{k−i}^T) be the theoretic output covariances and

H_{p+1,q} = [ R_1      R_2      …  R_q
              R_2      R_3      …  R_{q+1}
              ⋮        ⋮        ⋱  ⋮
              R_{p+1}  R_{p+2}  …  R_{p+q} ] = Hank(R_i)
be the theoretic block Hankel matrix. It possesses the well-known factorization property H_{p+1,q} = O_{p+1} C_q with the observability matrix

O_{p+1} = [ C
            CA
            ⋮
            CA^p ]

and the controllability matrix C_q. Denote the system parameter in a (healthy) reference state as θ_0 and in the tested state of the system as θ. The residual function for a damage detection test from [9], [10] compares the system parameter θ_0 to data measured from the system corresponding to θ, while the parameter θ itself is not identified. In the reference state, the observability matrix O_{p+1}(θ_0) is obtained in the modal basis (C = Φ, A = Λ) and its left null space S(θ_0) is computed, e.g. by a singular value decomposition of O_{p+1}(θ_0), such that S(θ_0)^T O_{p+1}(θ_0) = 0 and thus S(θ_0)^T H_{p+1,q} = 0
in the reference condition. Using measured data (y_k)_{k=1,…,N}, a consistent estimate Ĥ_{p+1,q} = Hank(R̂_i) is obtained from the empirical output covariances

R̂_i = (1/N) Σ_{k=1}^{N} y_k y_{k−i}^T.

To decide whether the measured data correspond to θ_0 or not, the residual vector ζ_N with

ζ_N = √N vec(S(θ_0)^T Ĥ_{p+1,q})    (3)
is defined. It is tested whether this residual function is significantly different from zero, corresponding to a test between the hypotheses [9]

H_0: θ = θ_0              (reference system),
H_1: θ = θ_0 + δθ/√N      (damaged system),

where δθ is unknown but fixed. With this statistical framework, very small changes in the parameter θ can be detected if the number of data samples N is large enough. The residual function is asymptotically Gaussian distributed [9]. Let J be its asymptotic sensitivity w.r.t. θ_0 and Σ its asymptotic covariance, with consistent estimates Ĵ and Σ̂, respectively, which are described in detail in [11], [12]. A decision between the hypotheses H_0 and H_1 is achieved through a generalized likelihood ratio (GLR) test, amounting to
χ²_N = ζ_N^T Σ̂^{−1} Ĵ (Ĵ^T Σ̂^{−1} Ĵ)^{−1} Ĵ^T Σ̂^{−1} ζ_N,    (4)

which is compared to a threshold that is set up in the reference condition for a desired type I error. The test variable χ²_N is asymptotically χ²-distributed with rank(J) degrees of freedom and non-centrality parameter γ = δθ^T F δθ, where

F = J^T Σ^{−1} J    (5)
is the asymptotic Fisher information on θ_0 contained in ζ_N.
2.3 Power of the Test
The quality of the damage detection test is determined by the "power of the test" π, which is the probability that the test classifies data from the damaged system correctly as damaged for a given type I error α, e.g. α = 5%. The power of the test is the complement of the resulting type II error β, i.e. π = 1 − β. It is high when the overlap between the distributions of the test variable in the reference state and the damaged state is low. The overlap of both distributions for a particular damage is determined by the non-centrality parameter γ = δθ^T F δθ, and the objective of an optimal sensor placement for the considered damage detection method is to maximize γ. Both distributions with the type I and type II errors are illustrated in Fig. 1.
Fig. 1 χ2 distributions in reference and damaged states for 15 degrees of freedom, γ = 35 in the damaged state and α = 5%.
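The situation of Fig. 1 can be reproduced numerically. The following numpy sketch draws central χ2 samples (reference state, 15 degrees of freedom) and non-central χ2 samples (damaged state, γ = 35), sets the threshold for α = 5% and evaluates the power; the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
dof, gamma, alpha, M = 15, 35.0, 0.05, 200000

# Central chi2 (reference) and noncentral chi2 (damaged) samples: the
# noncentrality gamma is realized as a mean shift of one Gaussian component
mu = np.zeros(dof)
mu[0] = np.sqrt(gamma)
ref = (rng.standard_normal((M, dof)) ** 2).sum(axis=1)
dam = ((rng.standard_normal((M, dof)) + mu) ** 2).sum(axis=1)

threshold = np.quantile(ref, 1 - alpha)   # threshold for 5% type I error
power = (dam > threshold).mean()          # empirical power, pi = 1 - beta
print(threshold, power)                   # threshold close to 25
```

The same quantities could be obtained from the analytic (non-)central χ2 distributions, e.g. with scipy.stats; the Monte Carlo version mirrors the validation procedure used later in Section 3.4.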
3. Optimal Sensor Placement for Subspace-Based Damage Detection
The objective for optimal sensor placement is to optimize the damage detection performance with the described subspace-based algorithm. Thus, the criterion for an optimal sensor placement is the optimal detectability of damages, corresponding to the selection of a sensor layout that maximizes the power of the test and thus the non-centrality parameter γ for a certain set of damages. All necessary computations involve only a model of the investigated structure in the reference state.
3.1 Maximization of the Non-Centrality Parameter and Impact of the Parameterization
The non-centrality parameter γ depends on the particular change δθ of the system parameter vector θ. While θ was chosen as the collection of eigenvalues and mode shapes of system (1) in (2), it can basically be any kind of parameter vector with θ = θ_0 in the reference state and θ ≠ θ_0 in the damaged state. Let the dimension of the parameterization be m, i.e. δθ ∈ ℝ^m. Then, as shown in [13], for changes δθ of constant norm it holds that
∫_{‖δθ‖=1} γ d(δθ) = ∫_{‖δθ‖=1} δθ^T F δθ d(δθ) = (c_m / m) tr(F),    (6)
where c_m is the area of the unit sphere in ℝ^m and tr(·) denotes the trace of a matrix (the sum of the entries on its diagonal). Thus, the mean value of the non-centrality parameter γ for changes in the system parameter vector of unit norm is proportional to tr(F). Note that, as γ is averaged over all unit parameter changes in (6), the parameterization θ needs to be chosen such that unit changes in each of its elements have the same importance for the objective of damage detection. For example, assume that θ = [θ_1 θ_2] consists of two parameters, where θ_1 is large and θ_2 is small. Then, a unit change in θ_1 is smaller than a unit change in θ_2 in relative terms, and by maximizing tr(F) it is implied that a small relative change in θ_1 and a large relative change in θ_2 have the same importance for damage detection.
Two comments about the particular choice of the parameterization are in order. First, the selection of tr(F) as an optimality criterion implies, due to (6), that the parameterization θ must be independent of the sensor locations – otherwise different placements would not be comparable by tr(F) in (6). For example, if mode shapes at the sensor positions are used as parameters, then θ would change from one placement to the next, making the comparison with tr(F) impossible. In [13] the eigenvectors and the eigenvalues of the discrete-time state-space system were suggested as one possible parameterization, but in that case the issue was avoided because all the DOFs were measured. The system matrix A was also considered as a possible parameterization in [13]. Second, while the choice of the parameterization does not change the non-centrality parameter, the unit norm optimization is such that the relative magnitude of the entries in the parameter vector determines their relative importance. The invariance of the non-centrality parameter can be seen to hold by noting that if δθ = J̃ δθ̃, with the asymptotic Fisher information of θ̃ contained in the residual function as F̃ = J̃^T J^T Σ^{−1} J J̃, the non-centrality parameter is

γ̃ = δθ̃^T F̃ δθ̃ = δθ̃^T J̃^T J^T Σ^{−1} J J̃ δθ̃ = δθ^T J^T Σ^{−1} J δθ = γ.

3.2 Choice of the Parameterization and Computation of F
Damage is a change in the physical stiffness parameters. We choose them directly to define the parameter vector, namely θ = {p_1, …, p_L}. Then, the sensitivity matrix J with respect to these parameters for computing F in (5) is given by

J = J_{ζ,ϑ} J_{ϑ,µ} J_{µ,θ},    (7)
where ϑ is the collection of eigenvalues and mode shapes of the discrete-time system and µ is the collection of eigenvalues and mode shapes of the continuous-time system. In (7) one has
• J_{ζ,ϑ} = sensitivity of the expectation of the residual function with respect to the poles and mode shapes of the discrete-time system. It is derived in detail in [9], [10], [12] and it holds

J_{ζ,ϑ} = (O_{p+1}(θ_0)^† H_{p+1,q} ⊗ S(θ_0))^T J_{O,ϑ},    (8)

where J_{O,ϑ} is the derivative of the vectorized parametric observability matrix with respect to ϑ, and † denotes the pseudoinverse. Formulae for J_{O,ϑ} are given in [10], [12].
• J_{ϑ,µ} = sensitivity of the poles of the discrete-time system with respect to the poles of the continuous-time system [10],
• J_{µ,θ} = sensitivity of the poles and eigenvectors of the continuous-time system with respect to the structural parameters [10], [14], [15].
The covariance matrix Σ in (5) of the residual function ζ_N from (3) can be obtained as
Σ = (I ⊗ S(θ_0)^T) Σ_H (I ⊗ S(θ_0)),    (9)

where Σ_H is the covariance of the vectorized Hankel matrix. An efficient computation is described in detail in [11], [12]. Note that for the optimal sensor placement task, all quantities in this section are computed using a model, i.e. ϑ, O_{p+1}(θ_0), S(θ_0), J_{O,ϑ}, J_{ϑ,µ} and J_{µ,θ} are directly obtained from a model, and Ĥ_{p+1,q} and Σ are obtained using output data generated from the model.
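Once estimates of J and Σ are available, the test value (4) and the Fisher information (5) reduce to a few linear-algebra operations. A minimal numpy sketch, using linear solves instead of explicit inverses (the function and variable names are assumptions for illustration):

```python
import numpy as np

def glr_test(zeta, Sigma, J):
    # chi2_N = zeta^T Sigma^-1 J (J^T Sigma^-1 J)^-1 J^T Sigma^-1 zeta, eq. (4)
    Si_zeta = np.linalg.solve(Sigma, zeta)   # Sigma^-1 zeta
    Si_J = np.linalg.solve(Sigma, J)         # Sigma^-1 J
    F = J.T @ Si_J                           # Fisher information, eq. (5)
    chi2 = (J.T @ Si_zeta) @ np.linalg.solve(F, J.T @ Si_zeta)
    return chi2, F
```

As a quick sanity check: with Σ the 3×3 identity, J the first two columns of the identity and ζ = [1, 2, 3]^T, the projection keeps only the first two components and χ² = 1² + 2² = 5.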
3.3 Finding the Optimal Sensor Layouts
In order to find the desired sensor layout(s) that maximize our criterion (6) for optimal sensor placement, tr(F) is computed and compared for all considered sensor layouts, where we assume that the number of sensors is constant. The computation of tr(F) for all these sensor layouts has to be handled with care, otherwise the computational burden becomes infeasible. Combining (7) and (8) yields

J = (O_{p+1}(θ_0)^† H_{p+1,q} ⊗ S(θ_0))^T J_{O,ϑ} J_{ϑ,µ} J_{µ,θ}.    (10)
The following procedure for the computation of tr(F) for each sensor layout is suggested:
1. Compute Ĥ_{p+1,q}, O_{p+1}(θ_0) and the product J_{O,θ} = J_{O,ϑ} J_{ϑ,µ} J_{µ,θ} in (10) at all DOFs.
2. Compute Σ_H^{1/2} (such that Σ_H = Σ_H^{1/2} (Σ_H^{1/2})^T) at all DOFs with an efficient procedure as suggested in [11], [12].
3. For each sensor layout:
   a. Select the rows of O_{p+1}(θ_0), J_{O,θ} and Σ_H^{1/2}, and the rows and columns of Ĥ_{p+1,q}, that correspond to the DOFs at the sensor positions.
   b. Compute the null space of the observability matrix at the sensor positions.
   c. Compute F from (5), (9) and (10) using the selected matrices.
   d. Store tr(F) for the current layout.
4. Compare the values of tr(F) for the different sensor layouts and select the layouts with the highest values.
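The enumeration in steps 3-4 can be sketched as follows. The per-DOF sensitivity matrix and the Fisher computation below are simplified stand-ins (random data, F = JᵀJ); in the real procedure F is built from (5), (9) and (10) with the selected matrices.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n_dof, n_sensors, m = 15, 3, 15
J_full = rng.standard_normal((n_dof, m))   # toy per-DOF sensitivity rows

scores = {}
for layout in combinations(range(n_dof), n_sensors):
    J = J_full[list(layout), :]            # rows at the sensor positions
    F = J.T @ J                            # stand-in Fisher information
    scores[layout] = np.trace(F)           # step 3d: store tr(F)

best = max(scores, key=scores.get)         # step 4: highest tr(F)
print(len(scores), best)                   # 455 layouts in total
```

The loop structure is the point here: the expensive all-DOF quantities are computed once, and only cheap row/column selections are repeated per layout.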
The pseudoinverse of the observability matrix is needed in the sensitivity computation of (10). From a numerical point of view this computation becomes infeasible if the observability matrix is badly conditioned. Furthermore, a badly conditioned observability matrix indicates that some modes of the system are close to being unobservable. Thus, sensor configurations with a badly conditioned observability matrix can be dismissed a priori.
3.4 Validation
To validate the effectiveness of the sensor placement criterion of the previous sections, we evaluate the average power of the test for a set of damages. While the optimization criterion (6) actually aims at maximizing the average non-centrality parameter, we use the average power of the test as an indicator of the damage detection performance. Of course, a high non-centrality parameter leads to a high power of the test, but while the maximal power of the test is 100%, the non-centrality parameter does not have an upper bound and could falsify the performance evaluation if its values are extremely high for very few damages. Still, there is a high correlation between the non-centrality parameter and the power of the test, which is exploited for finding an optimal sensor placement with the presented strategy. Note that this validation, by obtaining the average power of the test from simulations, is only possible for small-sized problems (few DOFs, few sensors), as it is a computationally expensive task. For each sensor layout, the average power of the test can be determined as follows. The empirical distribution of the test variable χ²_N in (4) in the reference state is first obtained from a Monte Carlo simulation where several data sets are simulated using white noise excitation. In this way, a threshold for a given type I error, e.g. 5% (see Fig. 1), can be determined.
Then, the empirical distributions of the test variable are determined for all desired damaged states, in which the change in the structural parameter is a constant value. In each of these damaged states, a Monte Carlo simulation of several data sets is performed again, and the percentage of cases in which the test variable χ²_N is above the threshold is determined. This percentage is the power of the test for a specific damage. Finally, the average of the power of the test over all damages is computed, which is a quality criterion of the considered sensor layout for damage detection. This criterion is used for validating the optimal sensor placement.
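This validation procedure can be sketched with surrogate test values: an empirical threshold from reference-state χ2 samples and, for each damage scenario, the fraction of damaged-state values above it. The non-centrality values below are hypothetical, standing in for the χ² values that the full Monte Carlo simulation would produce.

```python
import numpy as np

rng = np.random.default_rng(5)
dof, alpha, M = 15, 0.05, 20000

# Empirical threshold from the reference-state chi2 distribution
ref = (rng.standard_normal((M, dof)) ** 2).sum(axis=1)
threshold = np.quantile(ref, 1 - alpha)

# One noncentrality value per simulated damage scenario (hypothetical values)
gammas = [5.0, 12.0, 35.0, 60.0]
powers = []
for g in gammas:
    mu = np.zeros(dof)
    mu[0] = np.sqrt(g)
    dam = ((rng.standard_normal((M, dof)) + mu) ** 2).sum(axis=1)
    powers.append((dam > threshold).mean())

avg_power = np.mean(powers)    # quality criterion of the layout
print(threshold, avg_power)
```

As expected, the power grows monotonically with the non-centrality parameter of the damage scenario.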
4. Numerical Example
A mass-spring chain system with 15 DOFs (see Fig. 2) is used for a numerical simulation. All springs have equal stiffness k = 100 and the masses of the elements are m = [1 2 3 1 3 1 2 1 2 1 2 1 2 1 3]. Classical damping is assigned such that each mode has a damping ratio of 2%. We assume that 3 sensors are available, which leads to 15!/((15 − 3)!·3!) = 455 possible sensor layouts. All sensor layouts are numbered consecutively, with layout #1 as {1,2,3}, then {1,2,4}, …, {1,2,15}, {1,3,4}, {1,3,5}, …, until {13,14,15}, which is layout #455.
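The consecutive numbering of the layouts corresponds to the lexicographic order in which itertools.combinations generates them, which makes the numbering easy to reproduce:

```python
from itertools import combinations

# All 3-sensor layouts over the 15 DOFs, in the consecutive numbering of the text
layouts = list(combinations(range(1, 16), 3))
print(len(layouts))    # 455
print(layouts[0])      # layout #1:   (1, 2, 3)
print(layouts[-1])     # layout #455: (13, 14, 15)
```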
Fig. 2 Considered mass-spring chain system.
Using the described model, the parameter ϑ containing the eigenvalues of the corresponding discrete-time system (time step 0.1) and the mass-normalized eigenvectors at all DOFs is obtained. From ϑ, the matrices in steps 1 and 2 of Section 3.3 are computed at all DOFs. For each of the sensor layouts, the respective entries in these matrices are chosen.
4.1 Performance of the Damage Detection Test
To evaluate the actual performance of the damage detection test for the different sensor layouts, 200 Monte Carlo simulations of the system were made in the reference state and in each damaged state for each sensor layout, where damage was simulated by decreasing the stiffness of a spring by 10%. All possible damage scenarios were considered, i.e. damage was introduced in each of the springs one at a time. For each simulation, 30,000 data samples were generated from white noise excitation with 5% added output noise. As described in Section 3.4, a threshold was obtained from the χ2 values of the reference state allowing a 5% type I error, and by comparing the χ2 values from all possible damaged states to the threshold, the average power of the test for each of the sensor layouts was determined. This validation procedure is a heavy computational burden, as in this case 200 Monte Carlo simulations were made in the reference state as well as in each of the damaged states, amounting to 3600 simulations for each of the 455 sensor layouts. The average power of the test for each sensor layout is shown in Fig. 3. It should be noted that the presented results are subject to statistical variability as they come from simulations themselves. It can be seen that some sensor layouts yield a very poor detection performance, such as layouts #1-#13, where the sensor positions are {1,2,3}, …, {1,2,15}, or layouts #86-#92 with sensors at {1,12,13}, …, {1,14,15} and {2,3,4}.
On the other hand, most sensor layouts yield a good detection performance, with the majority having an average power of the test between 60% and 80%. A few exceed 80%, e.g. layouts #60-#62 at positions {1,7,12}, {1,7,13} and {1,7,14}, layout #138 at positions {2,7,12}, layout #221 at {3,10,11} or layout #385 at {7,10,11}. As the computation of the test values in Fig. 3 uses the pseudoinverse of the observability matrix, the results are only meaningful for well-conditioned matrices. The condition number of the observability matrix of each sensor layout is computed and shown in Fig. 4. It can be seen that some sensor layouts with numbers higher than #430 exceed a condition number of 10^9. The poorly conditioned observability matrix leads to an unstable computation of the sensitivity and the Fisher information, hence these layouts cannot be used for damage detection. Note that these sensor layouts yield a poor power of the test in Fig. 3, coinciding with a high condition number of the observability matrix in Fig. 4. The corresponding sensor positions are {9,12,13}, …, {13,14,15}.
Fig. 3 Average power of the test for all sensor layouts considering damage in all springs (one at a time), using Monte Carlo simulations.
Fig. 4 Condition number of observability matrix in modal basis containing mass-normalized mode shapes for all sensor layouts.
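The a-priori screening by condition number can be sketched as follows; the modal matrices here are random stand-ins for the mass-normalized mode shapes and discrete-time poles of the chain, and the 10^9 cutoff follows the discussion above.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, p = 6, 5
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stand-in for diag(poles)
Phi = rng.standard_normal((15, n))                       # stand-in mode shape matrix

def observability(C, A, p):
    # Stack C, CA, ..., CA^p, i.e. O_{p+1} for the given output matrix C
    blocks, M = [], C.copy()
    for _ in range(p + 1):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

# Keep only layouts whose observability matrix is reasonably conditioned
kept = [lay for lay in combinations(range(15), 3)
        if np.linalg.cond(observability(Phi[list(lay), :], A, p)) < 1e9]
print(len(kept), 'of 455 layouts kept')
```

For each candidate layout the output matrix consists of the mode shape rows at the sensor DOFs, so the screening costs only one SVD-sized computation per layout.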
4.2 Optimal Sensor Placement with Fisher Information for all Possible Damages
With the procedure described in Section 3.3, the Fisher information estimate F̂ using 100,000 generated data samples is computed for all sensor placements, and its trace is shown in Fig. 5. It can be seen that the high values of the trace for layout numbers #430 and above coincide with a high condition number of the observability matrix of 10^9 and above, as shown in Fig. 4. These high values are due to numerical errors and are therefore discarded. Also, in Fig. 5 it can be seen that the lowest values of tr(F) correspond to sensor layouts with poor damage detection performance, such as layouts #1-#13, #92 and all the other layouts that appear as clear minima in Fig. 3. However, there is no clear link between the layouts with the highest trace and the best detection performance. This may be in part due to statistical variability in the data, but mostly due to the fact that the optimal sensor placement criterion using tr(F) is not exactly the same as the one used for the validation: while Fig. 3 shows the average power of the test, Fig. 5 shows the trace of the Fisher matrix, which is a surrogate for the "average" non-centrality parameter (see Equation (6)). Also, the average power of the test is only obtained from a few parameter changes of constant norm ‖δθ‖, namely for a unit change in each of the stiffness parameters one by one, while the integration in (6) is done over all parameter changes of constant norm ‖δθ‖, leading to tr(F). It may also be possible that the maximization over all possible parameter changes is not well-conditioned enough for only 3 sensors – good sensor layouts for some damages may always be too poor for other damages and vice versa.
Fig. 5 Trace of the Fisher information for all sensor layouts considering all stiffness parameters equally.
4.3 Optimal Sensor Placement with Fisher Information for Subsets of Damages
As pointed out in Section 3.1, a weighting of the structural parameters can easily be applied in order to give more importance to changes in some of the parameters. Then, maximizing tr(F) corresponds to finding the sensor placements where the damage detection algorithm is more sensitive to changes in parameters with a high weighting than in parameters with a low weighting. In this way, hotspots are monitored more precisely while the parameters with a low weighting are still taken into account. We consider two cases: first, the detection of damages in springs K1-K5, and second, the detection of damages in springs K11-K15 (cf. Fig. 2). These parameters are multiplied by a factor of 10 in the computation of the Fisher information. The trace of the Fisher information is then an indicator of the damage detection performance in the mentioned springs, which is to be compared to the average power of the test for the damages in these springs. In Figures 6 and 7, both the average power of the test
(blue line) and tr(F) (red line with dots) are shown for both cases. It can be seen that there is a strong correlation between the average power of the test and tr(F), in contrast to taking all parameters equally into account as in the previous section. All layouts with a low power of the test also have a low Fisher information, and layouts with a high Fisher information point in the direction of a high power of the test, although the correlation is not perfect. Still, in Fig. 7 the layouts with the 14 highest values of tr(F) all have 100% or nearly 100% average power of the test. From both examples it can be seen that choosing a peak of tr(F) for an optimal sensor placement yields a reasonably good power of the test.
Fig. 6 Power of the test (blue line) and trace of the Fisher information (red line with dots) for all sensor layouts for the detection of damages in springs K1-K5.
Fig. 7 Power of the test (blue line) and trace of the Fisher information (red line with dots) for all sensor layouts for the detection of damages in springs K11-K15.
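Weighting parameters by a factor amounts to rescaling the parameter vector, which transforms the Fisher information by the diagonal weight matrix on both sides (the invariance argument of Section 3.1 with J̃ = diag(w)). A sketch with a hypothetical Fisher matrix, weighting the last five springs as in the second case above:

```python
import numpy as np

rng = np.random.default_rng(6)
L = 15
J = rng.standard_normal((40, L))
F = J.T @ J                          # hypothetical Fisher matrix w.r.t. all springs

w = np.ones(L)
w[10:15] = 10.0                      # weight springs K11-K15 by factor 10
F_w = np.diag(w) @ F @ np.diag(w)    # Fisher information of the weighted parameters
print(np.trace(F_w))                 # criterion emphasizing the hotspot springs
```

Since tr(F_w) = Σ_i w_i² F_ii, the weighted trace is dominated by the sensitivities to the hotspot parameters while the remaining parameters still contribute.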
5. Conclusions
In this paper, the optimal sensor placement for subspace-based damage detection has been investigated. The average power of the test for a set of damages was selected as the criterion indicating the quality of a placement. Based on Monte Carlo simulations, it was found that a few sensor placements yield a poor performance in a mass-spring chain example, while most of them gave satisfactory results. It was also found that some placements led to a large condition number of the observability matrix, making the computation of the damage detection test infeasible, and it was shown that these placements can be discarded. An approach for optimal sensor placement based on the framework in [13] was derived, which aims at maximizing the non-centrality parameter of the test by maximizing the trace of the Fisher information matrix of the structural parameters contained in the damage detection residual. The intrinsic statistical properties of the damage detection test are taken into account when obtaining the optimal placement. Using a model of the considered structure and output data generated with this model, the optimal placement is computed for a fixed number of sensors and desired structural parameters, whose changes one wants to detect with the damage detection routine. By weighting these parameters, the user can decide their importance, e.g. in order to monitor hotspots more precisely. The presented method was applied to a 15 DOF model using 3 sensors. While the method was shown to rule out poor placements, finding the optimal placement proved to be difficult when the optimization was done for finding changes in all structural parameters. This may be an indication that the number of sensors used in the optimization was not sufficient.
However, a strong correlation between the optimization criterion (the trace of the Fisher information) and the actual performance of the damage detection test was visible when obtaining optimal placements for finding damages in a more restricted area of the structure. Future work includes a deeper investigation of the statistical properties of the optimization criterion and the comparison of sensor layouts with different numbers of sensors.
Acknowledgements The support from the NSF under the Hazard Mitigation and Structural Engineering Program Grant 1000391 is gratefully acknowledged.
References
[1] Kammer DC (1991), Sensor placement for on-orbit modal identification and correlation of large space structures, Journal of Guidance, Control, and Dynamics 14(2), 251-259.
[2] Heo G, Wang ML, Satpathi D (1997), Optimal transducer placement for health monitoring of long span bridge, Soil Dynamics and Earthquake Engineering 16(7-8), 495-502.
[3] Papadimitriou C (2004), Optimal sensor placement methodology for parametric identification of structural systems, Journal of Sound and Vibration 278(4-5), 923-947.
[4] Gawronski W, Lim KB (1996), Balanced actuator and sensor placement for flexible structures, International Journal of Control 65(1), 131-145.
[5] van de Wal M, de Jager B (2001), A review of methods for input/output selection, Automatica 37(4), 487-510.
[6] Meo M, Zumpano G (2005), On the optimal sensor placement techniques for a bridge structure, Engineering Structures 27(10), 1488-1497.
[7] Marano GC, Monti G, Quaranta G (2011), Comparison of different optimum criteria for sensor placement in lattice towers, The Structural Design of Tall and Special Buildings 20(8), 1048-1056.
[8] Debnath N, Dutta A, Deb SK (2012), Placement of sensors in operational modal analysis for truss bridges, Mechanical Systems and Signal Processing 31, 196-216.
[9] Basseville M, Abdelghani M, Benveniste A (2000), Subspace-based fault detection algorithms for vibration monitoring, Automatica 36(1), 101-109.
[10] Basseville M, Mevel L, Goursat M (2004), Statistical model-based damage detection and localization: subspace-based residuals and damage-to-noise sensitivity ratios, Journal of Sound and Vibration 275(3), 769-794.
[11] Döhler M, Mevel L (2011), Robust subspace based fault detection, Proc. 18th IFAC World Congress, Milan, Italy.
[12] Döhler M, Mevel L (2012), Subspace-based damage detection under changes in the ambient excitation statistics, submitted to Mechanical Systems and Signal Processing.
[13] Basseville M, Benveniste A, Moustakides G, Rougée A (1987), Optimal sensor location for detecting changes in dynamical behavior, IEEE Transactions on Automatic Control 32(12), 1067-1075.
[14] Balmès E, Basseville M, Mevel L, Nasser H, Zhou W (2008), Statistical model-based damage localization: a combined subspace-based and substructuring approach, Structural Control and Health Monitoring 15(6), 857-875.
[15] Bernal D (2012), Sensitivities of eigenvalues and eigenvectors from complex perturbations, Proc. 30th International Modal Analysis Conference, Jacksonville, FL, USA.