PIL-EYE: Integrated System for Sustainable Development of Intelligent Visual Surveillance Algorithms

Hyung Jin Chang1, Kwang Moo Yi1, Shimin Yin1, Soo Wan Kim1, Young Min Baek2, Ho Seok Ahn3 and Jin Young Choi1
1 Perception and Intelligence Lab., School of EECS, ASRI, Seoul National University, Seoul, Korea
2 Mitsuishi Sugita Lab., The University of Tokyo, Tokyo, Japan
3 Department of Applied Robot Technology, Korea Institute of Industrial Technology
Email: 1 {changhj,kmyi,smyoon,soowankim,jychoi}@snu.ac.kr, 2 [email protected], 3 [email protected]

Abstract—In this paper, we introduce a new platform for the integrated development of visual surveillance algorithms, named the PIL-EYE system. In our system, functional modules and algorithms can be added or removed without affecting other modules. The functional flow can be designed by simply scheduling the order of modules. Algorithm optimization becomes easy because the computational load can be checked in real time. Furthermore, commercialization can be achieved easily by packaging modules. The effectiveness of the proposed modular architecture is demonstrated through several experiments with the implemented system.

I. INTRODUCTION

Recently, much research on intelligent visual surveillance systems, including detection, tracking, and behavior analysis algorithms, has been in progress. Many of the concepts of current visual surveillance systems were prototyped under the Video Surveillance And Monitoring (VSAM) program [1]. There has been a significant number of recent projects on developing surveillance systems. Among the earlier automated surveillance systems, Pfinder [2], W4 [3] and UCF Knight [4] are well known. They successfully achieved detection and tracking tasks in various environments. However, those papers mainly focus on the individual algorithms and do not discuss how to combine them from a systematic point of view.

There are a few papers on the architecture of visual surveillance systems. [5], [6], [7] and [8] briefly reviewed many different surveillance systems and their architectures. The systems reviewed vary in camera type (IR, thermal, multi-camera, etc.) and system type (distributed, centralized, or networked). Because these architectures were originally designed for specific purposes, generalization is difficult. Wijnhoven et al. [9], Delgado et al. [10] and Vallejo et al. [11] presented flexible architectures for distributed knowledge-based systems, in which multi-agent technology and ontologies are used to represent knowledge. Such an architecture is flexible enough to be applied to different problems, but it is not appropriate for testing and comparing newly developed algorithms. In IBM S3 [12], an open and extensible framework using a plug-and-play framework for video analytics has

been developed, and the generated event metadata are sent to a database. Saini et al. [13] presented a flexible surveillance system that is portable, extensible, and dynamic. That system also has a modular architecture, but the role of each module is structurally fixed and the modules are closely coupled to each other. This property is not appropriate for developing an algorithm with a novel pipeline.

In this paper, we present an open and flexible architecture for easily integrating multiple independently developed technologies in a common framework, and for efficiently developing new intelligent visual surveillance algorithms. We develop a real-time intelligent visual surveillance system, named the Perception and Intelligence Lab - Enhance Your Eye (PIL-EYE) system, by applying a flexible modular system architecture. We set system-independent requirements that are necessary for comfortable and successful development of intelligent visual surveillance algorithms in actual projects. The following is a list of the requirements:
• Addition and removal of a new algorithm should be easy.
• Tuning an algorithm for a given video stream should be simple, and parameter changes should affect algorithms immediately.
• Computational time and memory consumption should be monitorable in real time.
• Algorithm modules should be developable in any programming language.
• The system should operate independently of video input devices.

II. THE PIL-EYE ARCHITECTURE

The PIL-EYE system is composed of multiple task modules, each consisting of multiple algorithms. As in Fig. 1, the main manager sits at the core and “manages” the whole system flow and resources. Each task module (e.g., the detection module, tracking module, etc.) is connected to the main manager and instantiated through a task engine. Under each task engine, a wrapper-like algorithm engine exists for each algorithm that is actually run to achieve surveillance tasks.
To keep the system architecture from being harmed during development, we defined the following three principles. These

Fig. 1. PIL-EYE system architecture and hierarchical structure.

rules are the key factors that keep the system stable and flexible:
1) The system consists of instances of each module.
2) All communication is held within child-parent relationships.
3) There is no communication between children or between parents.
As in Fig. 1, since the main manager communicates only with task engines, it is algorithm independent. Also, the algorithm engine acts as a wrapper for each individual algorithm, making the task engine algorithm independent as well. At the same time, algorithm developers do not need to care about the platform or OS, since the algorithm engine wraps the algorithm to fit the platform.

Fig. 2. Implemented algorithms in the PIL-EYE system. (Block diagram: the main manager connects the Image, GUI, Image Check, Preprocessing, Detection, Omni Tracking, Multi Tracking, Behavior, and Actuation modules, each listing its implemented algorithms.)

A. Main Manager

As in Fig. 1, the main manager is the core of the system. Every task module, instantiated by a task engine, is connected to the main manager. To follow the three principles defined above, the main manager maintains a data pool that holds pointers to the results of each module. Each module collects the necessary data from this data pool and runs; therefore, the task modules remain independent from one another. The main manager also governs the flow of the system, determining the order in which modules run. Thus, to change the system flow, only the scheduling part of the main manager needs to be modified.

Fig. 3. Two kinds of PIL-EYE GUI. (a) GUI for development. (b) GUI for testing on a laptop.

B. Task Engine

A task module exists for each task (such as tracking and detection), and each task module is instantiated through a task engine. A task engine manages the resources it needs to run each algorithm engine, as well as their results; the data kept in the main manager's data pool are pointers to the actual data in each task engine. A task engine also determines which algorithm engine should be run, acting as a manager for its task. Each task engine may have different I/O depending on the type of task it targets. However, every task engine has the same interface: Init, Run, and UnInit. This standardized interface makes the system flexible, meaning that task engines can be freely plugged in and out of the main manager.

C. Algorithm Engine

The algorithm engine acts as a wrapper for each algorithm (such as mean shift tracking [14] and particle filter tracking [15]), standardizing the I/O and interface of each algorithm. Therefore, every algorithm engine has the same I/O and interface, making each algorithm look the same from the task engine's perspective. This standardization also lets each algorithm be developed independently of the OS or platform, and keeps the system layers above the algorithm engine algorithm independent.

TABLE I
SYSTEM PROPERTY COMPARISON. O, E, F, T AND C IMPLY OPENNESS, EXTENSIBILITY, FLEXIBILITY, TRANSPARENCY AND COMMERCIALIZATION, RESPECTIVELY. (N/A STANDS FOR 'NOT ADDRESSED'.)

System                 | O   | E   | F   | T   | C
-----------------------|-----|-----|-----|-----|-----
Marcenaro et al. [16]  | No  | N/A | Yes | No  | N/A
SECRETS [17]           | No  | N/A | N/A | No  | N/A
Avanzi et al. [18]     | No  | Yes | N/A | No  | N/A
Liu et al. [19]        | N/A | N/A | N/A | No  | N/A
VSAM [20]              | N/A | N/A | N/A | No  | N/A
Pfinder [2]            | No  | No  | No  | No  | N/A
W4 [3]                 | No  | No  | No  | No  | Yes
Knight [4]             | No  | No  | No  | Yes | Yes
IBM S3 [12]            | N/A | Yes | N/A | Yes | Yes
Saini et al. [13]      | Yes | Yes | Yes | N/A | Yes
Proposed System        | Yes | Yes | Yes | Yes | Yes

III. THE PIL-EYE SYSTEM IMPLEMENTATION

The PIL-EYE system is composed of a single main manager and nine task engines. The whole system is implemented in C++ and uses the VXL image format [21]. The system operates on a Windows PC, but only the task engines related to image acquisition, actuation, and GUI depend on the OS, so the system can be easily ported to other platforms. The PIL-EYE system consists of the following task engines and video analysis technologies (illustrated in Fig. 2).

Image Module: Converts video/image data from various sources to the VXL image format for use throughout the system. Since various sources can be used, such as CCTV cameras, webcams, video files, and infrared cameras, several different video acquisition technologies are employed at the algorithm level, such as a grabber board interface, a USB grabber interface, and a video decoder.

GUI Module: Takes charge of displaying the system's results and acquiring user control for both developers and end-users. In the PIL-EYE system, we have implemented two MFC-based GUI engines: one targeted for development, and another targeted for end-users with less computation and a simpler GUI. The actual appearances of the GUIs are shown in Fig. 3.

Preprocessing Module: Removes noise and enhances the images acquired from the image engine. To remove noise, a rain and snow removal filter [22] and other filtering algorithms are employed. A Contrast Limited Adaptive Histogram Equalization (CLAHE) based enhancement technique [23] is implemented to recover color information and improve the low contrast caused by fog or low light conditions. A stabilizing algorithm [24] is also implemented to remove camera shake caused by wind or other interference.

Detection Module: Takes charge of detecting moving parts in video sequences generated by both static and moving cameras. Several algorithms are implemented and developed in the PIL-EYE system. Some well-known background subtraction algorithms [25]–[29] are implemented, and new algorithms [30]–[32] have been efficiently developed through direct comparison with the implemented ones.

Omni-Tracking Module: Tracks a single object within a

Fig. 4. The architecture of PIL-EYE is very flexible in designing the system flow. Block I stands for the image module; blocks IC, P, D, OT, MT, B, A and G are the image check, preprocessing, detection, omni-tracking, multi-tracking, behavior, actuation and GUI modules, respectively. Flow (a) is a typical sequence of an intelligent visual surveillance system [1], [42].

video sequence. The omni-tracking engine is usually targeted for use with a PTZ camera. The target can be given by user input, by a detection engine result, by a combination of the two, or even by metadata. In particular, using metadata makes direct comparison between implemented algorithms easy, reducing development effort. Several trackers are currently implemented in the PIL-EYE system [14], [15], [33]–[35].

Multi-Tracking Module: Uses data association techniques and a basic sampling strategy on detection engine results to assign and keep track of moving objects within the video sequence.

Behavior Module: Analyzes results from other engines and, if necessary, operates on its own to automatically understand the observed video sequence. For this purpose, trajectory analysis using detection and tracking information [36], scene understanding based on topic models [37], [38], face classification [39], object recognition [30], and abandoned object detection [40] are implemented.

Image Check Module: Periodically checks whether the current input images are appropriate for video analysis. We have implemented methods for camera covering detection and automatic filter usage determination.

Actuation Module: Provides developers with an easy way to plug in actuation modules, and controls the camera to keep the omni-tracked object at the center with the desired size. Many wide-range surveillance applications, such as moving camera detection [32] and PTZ tracking [41], actuate a pan-tilt-zoom (PTZ) motor and can be easily implemented using this engine.

IV. PROPERTIES OF THE PIL-EYE ARCHITECTURE

The PIL-EYE system architecture achieves five desirable properties for visual surveillance algorithm development. Among these properties, openness and extensibility are also mentioned as key desired principles in [12], [13],

Fig. 6. (a) Checking the computation time of an individual algorithm is possible in the PIL-EYE system architecture. (b) The graph shows the number of tracked objects over time.

Fig. 5. Configuration windows of the PIL-EYE system. In the middle window, the functional modules to activate are selected. The left (red) window shows the detection algorithm selection tab, where the active algorithm and its parameters can be selected and adjusted. The right (black) window is the tracking algorithm selection tab.

and flexibility, transparency, and commercialization are unique characteristics of our system architecture. Table I compares the system properties.

A. Openness

Openness means that the PIL-EYE system allows the integration of any algorithm, regardless of the programming language and platform used for development. The algorithm engine enables new algorithms to be freely plugged in.

B. Extensibility

The functionality of the system can be extended freely. The modular structure makes it possible to add a new task module or sensory device, and a module can be extended by adding a new algorithm.

C. Flexibility

Because all the I/O interfaces of the task engines and algorithm engines are standardized and independent of each other, the system flow can be designed flexibly. This property is very useful for simultaneous performance comparison of algorithms on the same video input and for developing algorithms with novel pipeline structures. Fig. 4 shows some examples of possible system flows. The active modules and algorithms can be selected simply by checking dialog boxes (Fig. 5).

D. Transparency

To develop an optimized system, checking the computational time and memory of each module is very important. The PIL-EYE system can easily measure the computation time of individual algorithms by measuring the time between independent modules (Fig. 6(a)). The GUI engine can also graph processing data when necessary, which helps visualize the behavior of an algorithm (Fig. 6(b)).

Fig. 7.

Overview of PIL-EYE system GUI.

E. Commercialization

Because each module is independent and standardized, it can be plugged into any PIL-EYE architecture system, so a newly developed algorithm can be easily commercialized as a single module. Configuring a system as a combination of modules to meet the needs of a user is also possible.

V. PERFORMANCE OF PIL-EYE

The proposed architecture was validated by applying it to the actual development of visual surveillance algorithms. Fig. 7 shows the implemented PIL-EYE system.

A. Synergy by Combining Modules

Although a perfect algorithm for all situations is the ultimate goal of research, no such algorithm exists yet. The modular structure can provide suitable visual conditions for developing algorithms by incorporating the preprocessing module. Performance improvement through the collaboration of two different algorithm modules on the same video analysis is also possible. For example, the preprocessing module reduces false alarms by filtering out noise and enhancing the contrast of input video frames (Fig. 8), and tracking becomes more robust to occlusion by merging in the detection result (Fig. 9).

B. Simultaneous Performance Comparison

The openness and flexibility of our system make it possible to compare the results of different algorithms on the same video input simultaneously. These properties are very helpful for developing a robust algorithm and for comparing its performance with existing algorithms. As we can see in Fig. 10, our

Fig. 9. (a) The result of tracking only; (b) the improved result obtained by merging the detection result.

Fig. 10. Tracking algorithm performance comparison (RMSE over frame index). Our system's openness and flexibility make it possible to compare trackers' performance simultaneously. Compared with the trackers OAB [43], MIL [44] and FRAG [45], the tracker in the PIL-EYE system shows more accurate tracking performance. ((b) is drawn in MATLAB using metadata extracted from the system.)

Fig. 8. The preprocessing module improves other algorithms' performance. (a) and (b) show detection results without/with snow removal filtering [22]; (c) and (d) show detection/tracking results without/with CLAHE [23], respectively.

newly developed tracker (pink box) achieves more stable and accurate tracking performance than the other trackers [43]–[45], and this can be easily verified numerically.

C. Various Module Pipeline Designs

Many high-level computer vision research areas, such as scene understanding and motion analysis, are so broad and complex that many different approaches are under development. Many of these methods assume that detection and tracking perform perfectly, and each approach has its own algorithmic flow. Our system is well suited to this kind of flexible data pipelining and algorithm composition. Fig. 11 shows successfully developed abnormal situation detection results. Fig. 11(a) shows abnormal trajectory detection using an LDA and HMM method [46]; the trajectory data used for training and testing are extracted from the detection/tracking modules. Fig. 11(b) shows combinatorial detection of an abnormal trajectory, region of interest (ROI) intrusion, and abandoned luggage. All the abnormal situations are successfully detected in our system.

VI. CONCLUSIONS

In this paper, an integrated system for sustainable development of intelligent video surveillance algorithms is introduced. The entire system consists of independent modules, one per algorithm, which can be freely plugged in and out without much modification. The independent modular structure also makes the system flow flexible and extensible. Any kind of input device (CCTV camera, PC cam, camcorder, video file, etc.) can supply video input for a variety of real-time experiments. With these properties, the algorithms in the PIL-EYE system create synergy effects, so that detection, tracking, and recognition achieve more robust performance.

Fig. 11. (a) Abnormal trajectory detection and (b) combinatorial abnormal situation detection.

REFERENCES

[1] DARPA, "Video surveillance and monitoring," part of DARPA's Image Understanding for Battlefield Awareness (IUBA) program, 1996.
[2] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: Real-time tracking of the human body," IEEE Trans. on PAMI, vol. 19, no. 7, pp. 780–785, 1997.
[3] I. Haritaoglu, D. Harwood, and L. S. Davis, "W4: Real-time surveillance of people and their activities," IEEE Trans. on PAMI, vol. 22, no. 8, pp. 809–830, 2000.
[4] M. Shah, O. Javed, and K. Shafique, "Automated visual surveillance in realistic scenarios," IEEE Multimedia, vol. 14, no. 1, pp. 30–39, 2007.
[5] M. Valera and S. A. Velastin, "Intelligent distributed surveillance systems: A review," in IEE Proceedings - Vision, Image and Signal Processing, 2005.
[6] R.-I. Chang, T.-C. Wang, C.-H. Wang, J.-C. Liu, and J.-M. Ho, "Effective distributed service architecture for ubiquitous video surveillance," Information Systems Frontiers, 2010.
[7] R. Cucchiara and G. Gualdi, "Mobile video surveillance systems: An architectural overview," WMMP 2008, LNCS 5960, pp. 89–109, 2010.
[8] N. Haering, P. L. Venetianer, and A. Lipton, "The evolution of video surveillance: an overview," Machine Vision and Applications, vol. 19, pp. 279–290, 2008.
[9] R. Wijnhoven, E. Jaspers, and P. de With, "Flexible surveillance system architecture for prototyping video content analysis algorithms," in Proc. Real-Time Imaging IX, SPIE, 2006.
[10] M. Delgado, J. Gomez-Romero, P. Magana, and R. Perez-Perez, "A flexible architecture for distributed knowledge based systems with nomadic access through handheld devices," Expert Systems with Applications, vol. 29, pp. 965–975, 2005.
[11] D. Vallejo, J. Albusac, J. J. Castro-Schez, C. Gonzalez, and L. Jimenez, "Towards advanced intelligent surveillance systems: Automatic visual reasoning," The Knowledge Engineering Review, 2009.
[12] Y.-L. Tian, L. Brown, A. Hampapur, M. Lu, A. Senior, and C.-F. Shu, "IBM smart surveillance system (S3): event based video surveillance system with an open and extensible framework," Machine Vision and Applications, vol. 19, pp. 315–327, 2008.
[13] M. Saini, M. Kankanhalli, and R. Jain, "A flexible surveillance system architecture," in Proc. Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, 2009, pp. 571–576.
[14] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Trans. on PAMI, vol. 25, pp. 564–577, 2003.
[15] J. Lim, D. Ross, R.-S. Lin, and M.-H. Yang, "Incremental learning for visual tracking," in Advances in NIPS, 2005, pp. 793–800.
[16] L. Marcenaro, F. Oberti, G. Foresti, and C. Regazzoni, "Distributed architectures and logical-task decomposition in multimedia surveillance systems," Proceedings of the IEEE, vol. 89, no. 10, pp. 1419–1440, Oct. 2001.
[17] N. Kodali, C. Farkas, and D. Wijesekera, "SECRETS: A secure real-time multimedia surveillance system," in Second Symposium on Intelligence and Security Informatics, vol. 3073, 2004, pp. 278–296.
[18] A. Avanzi, F. Brémond, C. Tornieri, and M. Thonnat, "Design and assessment of an intelligent activity monitoring platform," EURASIP J. Appl. Signal Process., vol. 2005, pp. 2359–2374, Jan. 2005.
[19] B. Liu, "A live multimedia stream querying system," in Proc. 2nd International Workshop on Computer Vision Meets Databases, 2005, pp. 35–42.
[20] R. T. Collins, A. J. Lipton, T. Kanade, H. Fujiyoshi, D. Duggins, Y. Tsin, D. Tolliver, N. Enomoto, O. Hasegawa, P. Burt, and L. Wixson, "A system for video surveillance and monitoring," 2000.
[21] The VXL Homepage, http://vxl.sourceforge.net.
[22] K. Garg and S. K. Nayar, "Detection and removal of rain from videos," in Proc. CVPR, 2004, pp. 528–535.
[23] K. Zuiderveld, Contrast limited adaptive histogram equalization. San Diego: Academic Press Professional, 1994, pp. 474–485.
[24] S. W. Kim, K. M. Yi, S. Oh, and J. Y. Choi, "Recovery video stabilization using MRF-MAP optimization," in Proc. ICPR, 2010, pp. 2804–2807.
[25] C. Stauffer and W. Grimson, "Adaptive background mixture models for real-time tracking," in Proc. CVPR, 1999, pp. 246–252.
[26] A. Elgammal, R. Duraiswami, D. Harwood, and L. S. Davis, "Background and foreground modeling using nonparametric kernel density for visual surveillance," Proceedings of the IEEE, 2002, pp. 1151–1163.
[27] C.-C. Chang, T.-L. Chia, and C.-K. Yang, "Modified temporal difference method for change detection," Optical Engineering, vol. 44, no. 2, 2005.
[28] D.-S. Lee, "Effective gaussian mixture learning for video background subtraction," IEEE Trans. on PAMI, vol. 27, 2005.
[29] D.-Y. Lee, J.-K. Ahn, and C.-S. Kim, "Fast background subtraction algorithm using two-level sampling and silhouette detection," in Proc. ICIP, 2009, pp. 3177–3180.
[30] J. H. Choi, "Moving object detection and classification method for algorithm embedded surveillance camera," Master's thesis, Seoul National University, Republic of Korea, 2008.
[31] J. Choi, "Robust moving object detection to various illumination conditions," Ph.D. dissertation, Seoul National University, Republic of Korea, 2010.
[32] Y. M. Baek, "Adaptive background estimation considering neighborhood pixels in moving camera," Master's thesis, Seoul National University, Republic of Korea, 2009.
[33] J. Lim, D. Ross, R. Lin, and M. Yang, "Incremental learning for visual tracking," in Advances in NIPS. MIT Press, 2004, pp. 793–800.
[34] D. Bolme, J. Beveridge, B. Draper, and Y. Lui, "Visual object tracking using adaptive correlation filters," in Proc. CVPR, Jun. 2010.
[35] K. M. Yi, S. W. Kim, and J. Y. Choi, "Orientation and scale invariant kernel-based object tracking with probabilistic emphasizing," in ACCV, 2009, pp. 130–139.
[36] A. Basharat, A. Gritai, and M. Shah, "Learning object motion patterns for anomaly detection and improved object detection," in Proc. CVPR, Jun. 2008.
[37] X. Wang, X. Ma, and E. Grimson, "Unsupervised activity perception by hierarchical bayesian models," in Proc. CVPR, 2007.
[38] W. Sultani, "Abnormal traffic detection using intelligent driver model," Master's thesis, Seoul National University, Republic of Korea, 2010.
[39] W. S. Kang and J. Y. Choi, "Kernel machine for fast and incremental learning of face," in SICE-ICASE, Oct. 2006.
[40] Y. E. Moon, "Abandoned object detection using adaptive background model," Master's thesis, Seoul National University, Republic of Korea, 2010.
[41] K. M. Yi and J. Y. Choi, "Robust object tracking using PTZ camera," in Summer Conference of IEEK, 2010.
[42] A. Senior, "An introduction to automatic video surveillance," pp. 1–9.
[43] H. Grabner, M. Grabner, and H. Bischof, "Real-time tracking via on-line boosting," in Proc. British Machine Vision Conference (BMVC), vol. 1, 2006, pp. 47–56.
[44] B. Babenko, M.-H. Yang, and S. Belongie, "Visual tracking with online multiple instance learning," in Proc. CVPR, 2009.
[45] A. Adam, E. Rivlin, and I. Shimshoni, "Robust fragments-based tracking using the integral histogram," in Proc. CVPR, Jun. 2006.
[46] H. Jeong, "Unsupervised motion learning for abnormal behavior detection in visual surveillance," Master's thesis, Seoul National University, Republic of Korea, 2011.
