Optical Motion Tracking in Earthquake-Simulation Shake Table Testing: Preliminary Results

Paul Rodríguez

Abstract— Sensors such as accelerometers and displacement transducers are generally used in earthquake-simulation shake table testing to measure the induced motions. In particular, the Anti-seismic Structure Laboratory at the Pontifical Catholic University of Peru (PUCP) uses LVDT (linear variable differential transformer) sensors, which can achieve accurate measurements. However, such sensors limit the number of measuring points; moreover, the required instrumentation is demanding, and destructive tests cannot be measured with such devices. We present the preliminary results of an optical motion tracking system to measure the induced motions for shake table testing at the PUCP's Anti-seismic Structure Laboratory.

I. INTRODUCTION

Shake table tests are used to assess how a model or a full-size building responds to the vibrations of a simulated earthquake. The accuracy of the induced motion's measurements is paramount to appraise the behavior and the structural health of the construction during the tests, and their analysis may also be used to propose design enhancements. Sensors such as accelerometers and displacement transducers are commonly used in Shake Table experiments to measure the earthquake-induced motions. Nevertheless, such sensors limit the number of measuring points; moreover, the required instrumentation is demanding and destructive tests cannot be measured with such devices. Other motion tracking methodologies include magnetic, acoustic, and optical techniques.

In particular, the PUCP's Anti-seismic Structure Laboratory uses LVDT (linear variable differential transformer, a type of displacement transducer) sensors to measure the earthquake-induced motions. The tests are carried out on a Shake Table (4.40 × 4.40 m) with one degree of freedom. The maximum displacement and acceleration of the Shake Table are 130 mm in each sense and 1 G, respectively. Usually each test lasts about 30 seconds. Two types of tests are carried out in the PUCP's Anti-seismic Structure Laboratory: "resistance" and "wall-bend" tests. The "resistance" test is carried out to assess the structural health of the construction after the simulated earthquake, whereas the "wall-bend" test studies the bending properties of a given wall. In both cases the LVDT sensors measure the construction's induced motion only in the direction and sense of the Shake Table's drive movement. Even though the LVDT sensors have good accuracy (in the order of millimeters), they must be physically attached to the


Paul Rodríguez is with the Digital Signal Processing Group at the Pontificia Universidad Católica del Perú, Lima, Peru. Email: [email protected], Tel: +51 1 9 9339 5427

structure and require cumbersome cabling and configuration, as well as substantial setup time (up to two days).

In the context of this paper, tracking is the problem of generating an inference about the motion of an object given a sequence of images. In particular, the object (or objects) for an optical motion tracking system (i) may be particular features of the scene (image-based systems) or (ii) may consist of artificial markers introduced for such a purpose (marker-based systems). Both types of optical motion tracking systems have been successfully employed in earthquake engineering. Image-based systems are reported in [1], [2], [3]; the main drawback of such systems is that they need robust feature detection techniques. For marker-based systems, active and retro-reflective markers are quite popular [4], [5], [6] because the segmentation procedure for such markers is relatively straightforward (based on intensity). Systems that prefer active markers (see [4]) may need high-speed cameras in order to obtain accurate measurements, while systems based on retro-reflective markers [5], [6] need some type of artificial illumination; moreover, even though intensity-adapted segmentation may be used (see [7] for instance), these systems are not fully robust to changing levels of illumination.

In this work we adopt a marker-based optical motion tracking system to measure the (simulated) earthquake-induced motions, where one of our main contributions is the use of AM-FM (amplitude modulated, frequency modulated) opaque markers. The segmentation process for such markers is robust to changing levels of illumination. They also embed spatial resolution information used to simplify the measurement of the induced motions. Furthermore, since our markers are opaque (i.e., they neither emit light nor are retro-reflective) and given the modest dynamics of the PUCP's Shake Table, the required video recording speed for our system is within today's standard bounds (30 to 60 fps). In Figure 1 we show our system configuration for the two types of tests carried out in the PUCP's Anti-seismic Structure Laboratory, where several high-resolution digital cameras¹ are used to record the displacement of the markers placed onto the studied structure's surface.

This paper is organized as follows: in Section II we describe the characteristics of our AM-FM passive markers and their robust segmentation procedure based on AM-FM demodulation [9], [10, Ch. 4.4]. In Section III we show our preliminary experimental results. Finally, in Section IV we give our concluding remarks.

¹ Sanyo XACTI HD1010, 1920×1080 pixels @ 29.97 fps, 1280×720 @ 59.94 fps. The XACTI HD1010 digital cameras were chosen for this project because they offer a good compromise between video recording capabilities (see [8]) and economic constraints.


(a) Configuration for a “resistance” test.

(b) Configuration for a "wall-bend" test.

Fig. 1. System configuration for the two types of tests carried out in the PUCP's Anti-seismic Structure Laboratory.

(a) AM-FM opaque markers. The 2-D sinusoid pattern may be in the horizontal or vertical direction. Circles are 10 cm apart from each other (vertical and horizontal distances).

(b) Scene with vertical AM-FM markers placed onto a small structure. Image size is 1920 × 1080 pixels; it is a frame from a video recorded at 29.97 fps with a Sanyo XACTI HD1010 digital camera [8].

(c) 2-D dataset ω^(1)(ξ) displayed as an image. Note that ω^(1)(ξ) is the dominant instantaneous frequency (IF) of 2(b) in the vertical direction (see (2)).

(d) 2-D dataset ω^(2)(ξ) displayed as an image. Note that ω^(2)(ξ) is the dominant instantaneous frequency (IF) of 2(b) in the horizontal direction (see (2)).

II. TRACKING AM-FM MARKERS

The AM-FM (amplitude modulated, frequency modulated) modulation image representation has been successfully employed to segment images in a variety of scenarios [11], [12], [13], where the AM-FM structures were intrinsic to the analyzed image. In contrast, we impose artificial AM-FM structures, in the form of opaque markers, on the scene; for the scope of this paper, the scene is a given structure placed onto a shake table (see Figures 2(a) and 2(b)). In this section we provide a brief description of the AM-FM demodulation and its associated dominant component analysis (DCA) [9], [10, Ch. 4.4]. We then describe the AM-FM marker and the segmentation procedure based on the estimated AM-FM parameters extracted from the scene.

A. AM-FM Demodulation

The AM-FM representation of images allows us to model non-stationary image content in terms of amplitude and phase functions using

I(ξ) = Σ_{n=1}^{M} a_n(ξ) cos(ϕ_n(ξ))    (1)

where I(ξ) : R² → R is the input image, ξ = (ξ1, ξ2) ∈ R², M ∈ N, a_n : R² → [0, ∞) and ϕ_n : R² → R. The interpretation of (1) suggests that the M AM-FM component images a_n(ξ) · cos(ϕ_n(ξ)) are used to model the essential image modulation structure. The amplitude functions a_n(ξ) embed the energy (intensity in this context) of an image's region, whereas the frequency-modulated components cos(ϕ_n(ξ)) capture fast-changing spatial variability in image intensity.
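To make the model in (1) concrete, the following Matlab sketch synthesizes a single AM-FM component with a slowly varying Gaussian amplitude envelope and a vertical sinusoidal phase, qualitatively similar to our markers' 2-D sinusoid pattern; the grid size and parameter values are illustrative choices, not taken from the actual system.

% Single AM-FM component a(xi)*cos(phi(xi)) as in (1); illustrative values.
[x1, x2] = meshgrid(linspace(-1, 1, 256));
a   = exp(-4*(x1.^2 + x2.^2));    % instantaneous amplitude: smooth envelope
phi = 2*pi*20*x2;                 % phase of a vertical 2-D sinusoid (20 cycles)
I   = a .* cos(phi);              % AM-FM component image
figure; imagesc(I); axis image; colormap gray; title('AM-FM component');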

Fig. 2. AM-FM markers are shown in (a); (b) depicts a typical video frame. In (c) and (d) we display the 2-D datasets ω^(1)(ξ) and ω^(2)(ξ) as images; ω^(1)(ξ) and ω^(2)(ξ) are the dominant instantaneous frequency (IF) of 2(b) in the vertical and horizontal directions, respectively (see (2)). Note the high contrast of (c), where the vertical AM-FM markers are placed.

Given a real image I(ξ), we need to compute the AM-FM parameters. We use the term AM-FM demodulation to mean the computation of the instantaneous amplitude (IA) functions a_n(ξ), the instantaneous phase (IP) functions ϕ_n(ξ), and the instantaneous frequency (IF) vector functions ω_n(ξ) = ∇ϕ_n(ξ) = (∂ϕ_n(ξ)/∂ξ1, ∂ϕ_n(ξ)/∂ξ2) = (ω_n^(1)(ξ), ω_n^(2)(ξ)). For the scope of this paper, the IA, IP and IF computations

are carried out via the robust AM-FM demodulation algorithm proposed in [14] (see also [9] and [15] for further details). The AM-FM dominant component analysis (DCA), described in [9], consists of applying a collection of band-pass filters (a filter bank) to the original image, performing the AM-FM demodulation of each band-pass filtered image, and selecting the estimates from the channel with the maximum amplitude estimate a(ξ) = max_{n∈[1,M]} a_n(ξ). Hence, the algorithm adaptively selects the band-pass filter with the maximum response, modeling the input image as

I(ξ) = a(ξ) cos(ϕ(ξ))    (2)

where κ(ξ) = argmax_{n∈[1,M]} a_n(ξ), ϕ(ξ) = ϕ_κ(ξ)(ξ), ω(ξ) = (ω^(1)(ξ), ω^(2)(ξ)), and ω^(l)(ξ) = ω_κ(ξ)^(l)(ξ) for l = 1, 2. This approach does not assume spatial continuity and allows the model to adapt quickly to singularities in the image.

B. AM-FM Markers: description, segmentation and tracking

The design of the AM-FM opaque markers (see Figure 2(a)) includes six black circles (radius 0.5 cm) 10 cm apart from each other (vertical and horizontal distances) and a 2-D sinusoid pattern (in the horizontal or vertical direction) whose wavelength is approximately 1 cm. The markers are printed on A4 Canson paper using a standard laser printer and are placed onto the structure to be analyzed using strong glue and/or small upholstery nails; in Figure 2(b) six markers are placed onto a small structure.

Given a video frame (e.g., Figure 2(b)) with the AM-FM markers placed onto a structure, we compute the DCA AM-FM demodulation parameters following (2); in particular, we are interested in the dominant instantaneous frequency (IF) vector function with vertical and horizontal components given by ω^(1)(ξ) and ω^(2)(ξ) respectively. In Figures 2(c) and 2(d) we display the 2-D datasets ω^(1)(ξ) and ω^(2)(ξ) as images. Clearly, the high-contrast areas of the image shown in Figure 2(c) are related to the 2-D sinusoid pattern in the vertical direction (see Figure 2(b)). Moreover, the resulting histogram of the 2-D dataset ω^(1)(ξ) (or ω^(2)(ξ) when appropriate) will be multimodal, where the highest-frequency mode (by design) should correspond to the 2-D sinusoid pattern; in Figure 3 we show the histogram of the instantaneous frequency of 2(b) in the horizontal direction (a very typical histogram).

Using the highest-frequency-mode assumption, we may follow two approaches to first segment each marker as an entity containing the black circles: (i) compute the histogram of ω^(1)(ξ) (or ω^(2)(ξ) when appropriate), identify the lower and upper thresholds related to the highest-frequency mode (for instance, via [7]), and use such thresholds for segmentation; or (ii) manually segment the location of each marker in a given "training" frame, compute the histogram of ω^(1)(ξ) (or ω^(2)(ξ) when appropriate) in the manually segmented regions of interest (ROI), identify the lower and upper thresholds (again via [7]), and use such thresholds to segment all other frames. The first approach is fully automatic but, even though unlikely in a real scenario, the highest-frequency-mode assumption may not be true. The second approach needs one training frame to be manually segmented but in practice gives better segmentation results; it is the approach used in the present report. Once the ROIs (markers) are segmented, we proceed to estimate the centroid of each black circle (six per marker); since at this point we know the properties of the object we are dealing with, this is a straightforward procedure.
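As a rough illustration of DCA, the Matlab sketch below (Image Processing Toolbox) filters a frame with a small Gabor filter bank, keeps the per-pixel channel with the largest magnitude response as the IA estimate, and uses the winning filter's center frequency as a coarse stand-in for the dominant IF. This is a simplification of the demodulation algorithm of [14], and the file name, wavelengths, and orientations are assumptions.

% Coarse DCA sketch: per-pixel dominant channel over a Gabor filter bank.
I   = im2double(rgb2gray(imread('frame0001.png')));  % assumed frame file
gb  = gabor([4 8 16 32], [0 90]);   % wavelengths (pixels) x orientations (deg)
mag = imgaborfilt(I, gb);           % channel magnitudes, size H x W x N
[a, k] = max(mag, [], 3);           % IA estimate a(xi) and winning channel k
f = 1 ./ [gb.Wavelength];           % center frequency per filter (cycles/pixel)
omega = f(k);                       % coarse dominant IF, one value per pixel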

Fig. 3. Histogram of the instantaneous frequency of 2(b) in the horizontal direction (a very typical histogram). The mode corresponding to the AM-FM markers is highlighted. The horizontal axis represents normalized frequency (ticks from 0 to 0.045).
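Continuing the earlier sketch, marker segmentation and circle localization might look as follows; the IF thresholds (standing in for the mode visible in Figure 3), the intensity threshold for the black circles, and the morphological radius are all assumed values, not those of the actual system.

% Segment ROIs by thresholding the dominant IF, then find circle centroids.
lo = 0.035; hi = 0.045;                  % thresholds around the markers' mode (assumed)
roi  = (omega >= lo) & (omega <= hi);    % pixels belonging to the sinusoid pattern
roi  = imclose(roi, strel('disk', 15));  % consolidate each marker into one region
dark = roi & (I < 0.3);                  % black circles inside the ROIs (assumed threshold)
stats = regionprops(dark, 'Centroid');   % one centroid per detected circle
centroids = cat(1, stats.Centroid);      % [x y] in pixels, one row per circle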

Since the markers embed spatial information, and since the displacement characteristics of the Shake Table at the PUCP's Anti-seismic Structure Laboratory (maximum displacement and acceleration of 130 mm in each sense and 1 G, respectively) are modest, it is possible to estimate the displacement of each marker's black circles (and therefore the structure's displacement) without calibrating the cameras' intrinsic/extrinsic parameters prior to the test (see [16], [17]). Nevertheless, we plan in the short term to include such calibration. Furthermore, it is needed for "wall-bend" tests, where the movement is perpendicular to the cameras' point of view; once the cameras' intrinsic/extrinsic parameters are estimated, it will be possible to estimate the displacement of the individual black circles (and therefore, the wall bending properties) in the direction and sense of the Shake Table's drive movement.

III. PRELIMINARY EXPERIMENTAL RESULTS

The preliminary experimental results presented below come from a collection of destructive tests carried out as part of a final-year course ("Anti-seismic Engineering" from the Civil Engineering Department) at the PUCP's Anti-seismic Structure Laboratory. Four small structures (with different mechanical properties) were placed and secured onto the Shake Table; in Figure 4 we depict the structures' organization. Due to the nature of the tests (destructive), no LVDT sensors were used.

Fig. 4. Structures' organization and cameras' deployment for destructive tests carried out as part of a final-year course ("Anti-seismic Engineering" from the Civil Engineering Department) at the PUCP's Anti-seismic Structure Laboratory.

The deployment of the cameras (Sanyo XACTI HD1010 digital cameras [8]) is illustrated in Figure 4; note that from the point of view of cameras 1 and 2 the movement of the two frontal structures will be "right" and "left". The videos were recorded at 29.97 fps and 1080p (frames of 1920 × 1080 pixels) in H.264/MPEG-4 AVC format. A simple Matlab MEX interface (available at [18]) based on the FFmpeg library ([19]) was used to directly decode the videos; a Matlab program was written to estimate the black circles' centroids of each marker using the procedure described in Section II-B.
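For illustration, a per-frame processing loop of the kind just described might be written as below, using Matlab's built-in VideoReader in place of the FFV4MEX interface actually employed; the file name and the trackMarkers helper (standing in for the Section II-B procedure) are hypothetical.

% Per-frame decoding and marker tracking loop (sketch; names assumed).
v = VideoReader('camera2.mp4');           % H.264 video, 29.97 fps, 1920x1080
centroids = {};                           % per-frame circle centroids
while hasFrame(v)
    f = im2double(rgb2gray(readFrame(v)));
    centroids{end+1} = trackMarkers(f);   % hypothetical Section II-B procedure
end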

In what follows we focus on the video acquired by camera 2 (see Figure 4), for which the estimated spatial resolution was 1.008 mm/pixel. The structure in front of camera 2 had six markers placed onto its surface, and for the sake of clarity, we choose to present the estimated centroid evolution for the top-most/left-most and bottom-most/right-most black circles (related to the top-most/left-most and bottom-most/right-most markers, respectively; see Figure 6). In Figures 5(a) and 5(b) we present the horizontal and vertical displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles (see also Figures 6(a) through 6(f)). We must note that for Figure 5(a) "down" is to be understood as "left" and "up" as "right" in the cameras' perspective. From Figure 5(b) it should be apparent that the structure starts losing its original geometric properties (starts to break down) at about the 11th second, and from the 15th second onwards the structure is severely damaged. The T3 and T4 marks in 5(b) (and 5(d)) highlight two particularly extreme instances. Additionally, between instants T3 and T6 the rocking movement of the top part of the structure is clearly identifiable (see Figures 5(d) and 6(c)-6(f)). It may also be deduced that the base of the structure suffered structural damage: the bottom-most/right-most black circle ends up at a lower vertical position (compare the green line values in Figure 5(b) at seconds 0 and 35).
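The conversion from pixel tracks to the displacement curves of Figure 5 is straightforward; a minimal sketch follows, assuming a 1×T vector cx of horizontal centroid positions in pixels produced by the tracking step.

% Pixel-to-mm displacement time series (sketch; cx is an assumed input).
res = 1.008;                   % spatial resolution for camera 2, mm/pixel
fps = 29.97;                   % recording frame rate
dx  = (cx - cx(1)) * res;      % horizontal displacement relative to frame 1, mm
t   = (0:numel(cx)-1) / fps;   % time axis, seconds
figure; plot(t, dx); xlabel('Time (s)'); ylabel('Displacement (mm)');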

IV. CONCLUSIONS

The use of AM-FM markers in the proposed optical motion tracking system provides a robust scheme to segment the regions of interest (markers) and consistently track the induced movements. The system has an accuracy comparable to that of the LVDT sensors (in the order of millimeters). Moreover, it dramatically decreases the instrumentation time: from up to two days down to less than 30 minutes.

In the short term we intend to collect full statistics and compare the LVDT measurements with those estimated by the proposed system. We also plan to include the estimation of the cameras' intrinsic/extrinsic parameters to correct the distortion in the video frames and to allow us to estimate the displacement in "wall-bend" tests. Currently, the proposed system analyzes the acquired videos in an off-line fashion, taking about one hour to estimate the induced movements from one video sequence. In the mid-term, we foresee an on-line system, where the Matlab scripts currently used are replaced by a C library; we also plan to acquire uncompressed video directly via an HDMI frame-grabber [20].

V. ACKNOWLEDGMENT

The author thanks the diligent cooperation of the members of the Anti-seismic Structure Laboratory at the PUCP.

REFERENCES

[1] Gongkang Fu and Adil G. Moosa, "An optical approach to structural displacement measurement and its application," Journal of Engineering Mechanics, vol. 128, no. 5, pp. 511–520, 2002.
[2] D. Nastase, S. Chaudhuri, R. Chadwick, T. Hutchinson, K. Doerr, and F. Kuester, "Development and evaluation of a seismic monitoring system for building interiors - part I: Experiment design and results," IEEE Trans. on Instrumentation and Measurement, vol. 57, pp. 332–344, 2008.
[3] D. Nastase, S. Chaudhuri, R. Chadwick, T. Hutchinson, K. Doerr, and F. Kuester, "Development and evaluation of a seismic monitoring system for building interiors - part II: Image data analysis and results," IEEE Trans. on Instrumentation and Measurement, vol. 57, pp. 345–354, 2008.
[4] Satoshi Fujita, Osamu Furuya, and Tadashi Mikoshiba, "Research and development of measurement method for structural fracturing process in shake table tests using image processing technique," Journal of Pressure Vessel Technology, vol. 126, no. 1, pp. 115–121, 2004.
[5] T. Hutchinson, S. Ray Chaudhuri, F. Kuester, and S. Auduong, "Light-based motion tracking of equipment subjected to earthquake motions," J. Comp. in Civil Engineering, vol. 19, pp. 292–303, 2005.
[6] K. Kanda, Y. Miyamoto, A. Kondo, and M. Oshio, "Monitoring of earthquake induced motions and damage with optical motion tracking," Smart Materials and Structures, vol. 14, pp. 32–38, 2005.
[7] Jeng-Horng Chang, Kuo-Chin Fan, and Yang-Lang Chang, "Multimodal gray-level histogram modeling and decomposition," Image Vision Comput., vol. 20, no. 3, pp. 203–216, 2002.
[8] Sanyo Corp., "Instruction manual VPC HD1010," http://www.sanyo.com/.
[9] J. P. Havlicek, AM-FM Image Models, Ph.D. thesis, The University of Texas at Austin, 1996.
[10] A. C. Bovik, Handbook of Image and Video Processing, Academic Press, May 2000.
[11] M. S. Pattichis, C. S. Pattichis, M. Avraam, A. C. Bovik, and K. Kyriakou, "AM-FM texture segmentation in electron microscopic muscle imaging," IEEE Trans. Medical Imaging, vol. 19, no. 12, pp. 1253–1258, December 2000.
[12] P. Rodríguez, M. S. Pattichis, and M. B. Goens, "M-mode echocardiography image and video segmentation based on AM-FM demodulation techniques," in Int. Conf. of the IEEE Eng. in Medicine and Biology Society, Cancun, Mexico, September 2003, vol. 2, pp. 1176–1179.
[13] C. Christodoulou, C. S. Pattichis, Victor Murray, M. S. Pattichis, and A. Nicolaides, "AM-FM representations for the characterization of carotid plaque ultrasound images," in 4th European Conference of the International Federation for Medical and Biological Engineering, Springer Berlin Heidelberg, 2008, pp. 546–549.
[14] Paul Rodríguez, Fast and Accurate AM-FM Demodulation of Digital Images With Applications, Ph.D. thesis, University of New Mexico (UNM), Albuquerque, NM, USA, 2005.
[15] G. Girolami and D. Vakman, "Instantaneous frequency estimation and measurement: a quasi-local method," Measurement Science and Technology, vol. 13, pp. 909–917, June 2002.
[16] Zhengyou Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 1330–1334, 2000.
[17] P. F. Sturm and S. J. Maybank, "On plane-based camera calibration: a general algorithm, singularities, applications," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1999, vol. 1.
[18] P. Rodríguez, "FFmpeg video for MEX (FFV4MEX)," http://sites.google.com/a/istec.net/prodrig/.

[19] "FFmpeg," http://ffmpeg.org/.
[20] Black Magic, "Intensity Pro," http://www.blackmagic-design.com/products/intensity/.

Fig. 5. Horizontal ((a) and (c)) and vertical ((b) and (d)) displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles, with instants T1 through T6 marked (vertical axes in mm, horizontal axes in seconds). See also Figures 6(a) through 6(f).
(a) Horizontal displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles, where "down" is to be understood as "left" and "up" as "right" in the cameras' perspective.
(b) Vertical displacement of the top-most/left-most (in blue) and bottom-most/right-most (in green) black circles, where "up" and "down" are in accordance with the cameras' perspective.
(c) Zoom of the horizontal displacement (a) between seconds 21 and 30.
(d) Zoom of the vertical displacement (b) between seconds 21 and 30. The rocking movement of the top part (in blue) of the structure is clearly identifiable; see also 6(c)-6(f).

Fig. 6. Frames from camera 2 (see Figure 4) at instants T1 through T6 (see Figures 5(a) and 5(b)): (a) T1, (b) T2, (c) T3, (d) T4, (e) T5, (f) T6. Original frames have been cropped to focus the reader on the structure.
