IJRIT International Journal of Research in Information Technology, Volume 1, Issue 7, July 2014, Pg. 270-277

International Journal of Research in Information Technology (IJRIT)

www.ijrit.com

ISSN 2001-5569

Character Identification in Movie Using Movie Script

Prashanth Gowda P L (1), Dr. S A Angadi (2), Henin Roland Karkada (3), Meenakshi S.R (4), Akarsh R Kapasi (5)

(1) M.Tech Student, Department of Computer Science and Engineering, Visvesvaraya Technological University, Belgaum, Karnataka, India, [email protected]
(2) Professor, Department of Computer Science and Engineering, VTU PG Studies, Belgaum, Karnataka, India, [email protected]
(3) M.Tech Student, Department of Computer Science and Engineering, Visvesvaraya Technological University, Belgaum, Karnataka, India, [email protected]
(4) M.Tech Student, Department of Bioinformatics Engineering, Dayananda Sagar College of Engineering, Karnataka, India, [email protected]
(5) M.Tech Student, Department of Computer Science and Engineering, Center for P.G Studies, VTU, Belgaum, Karnataka, India, [email protected]

Abstract

Identification of characters in videos has attracted significant research interest and has led to many interesting applications. Face detection is carried out using the robust Viola-Jones algorithm, and faces are recognized by principal component analysis. Face tracks are clustered using k-means, which helps in noise reduction; the number of clusters is set to the number of distinct speakers. Co-occurrence of names in the script and of face clusters in the video constitutes the corresponding name graph and face graph. The conventional global matching framework is modified using ordinal graphs for robust representation, and the eigenvalues of adjacency matrices are used for graph matching.

Keywords: Affinity Graph, Character Identification, Graph matching, K-means clustering, Ordinal Graph.

1. Introduction

1.1. Objective and Motivation

Automatic character identification [1] in movies is essential for semantic movie analysis such as movie indexing, summarization and retrieval. Character identification, though very intuitive to humans, is a tremendously challenging task in computer vision. This is due to the noise introduced by large variations in the appearance of characters:

1) Pose variation - uncontrolled cameras can record non-ideal face shots from a variety of angles, causing the correspondence between pixel locations and points on the face to differ from image to image.


2) Illumination variation - an individual may pass underneath lights with a range of relative positions and intensities throughout the course of one or more videos, so that the surface of the face appears different at different times.

3) Expression variation - the appearance of the face changes as the facial expression varies.

4) Scale variation - the face occupies a larger or smaller area of the video frame as it moves towards or away from the camera; in the worst case the spatial resolution of the face can decrease to the point where it becomes unrecognizable. Spatial resolution also depends on properties of the camera, such as the depth of field of its lens.

5) Motion blur - significant blur can obscure the face if the camera exposure time is too long or the head moves rapidly.

6) Occlusion - objects in the environment can block parts of the face, making the tasks of recognizing a face and distinguishing it from the background more difficult.

To cope with these variations, textual cues such as cast lists, scripts, subtitles and closed captions are usually exploited. Fig. 1 shows an example used in our experiment.

Fig. 1 Example of character identification from a sample video.

1.2. Related Work

The crux of the character identification problem [2] is to exploit the relations between videos and the associated texts in order to label the faces of characters with names. The first name-face association system was proposed [3] for news videos, based on the co-occurrence between detected faces and names extracted from the transcript. In TV programs and movies, however, character names seldom appear directly in the subtitle, and the script, which does contain character names, has no time stamps to align it with the video. Without local time information, character identification becomes a global face-name graph matching problem between the faces detected in the video and the names extracted from the movie script. Compared with local matching, global statistics are used for name-face association, which enhances the robustness of the algorithms.

The contributions of this work include: 1) a noise-insensitive face clustering method; 2) a noise-insensitive relationship representation method for constructing the name/face affinity graphs; 3) an ordinal graph representation that removes noise from the video; and 4) an eigenvalue-based graph matching algorithm for face-name graph matching.

According to the textual cues utilized, existing movie character identification methods can be roughly divided into three categories.

1) Category 1: Cast List Based. These methods utilize only the cast list textual resource. In the "cast list discovery" problem, faces are clustered by appearance, and the faces of a particular character are expected to be collected in a few pure clusters. Names for the clusters are then manually selected from the cast list. Ramanan et al. [4] proposed to manually label an initial set of face clusters and to cluster the remaining face instances based on clothing within scenes. The authors addressed the problem of finding particular characters by building a model/classifier of each character's appearance from user-provided training data. The character names in the cast are used as queries to search for face images and constitute the gallery set. The probe face tracks in the movie are then identified as one of the characters by multi-task joint sparse representation and classification. Recently, metric learning has been introduced into character identification in uncontrolled videos: cast-specific metrics are adapted to the people appearing in a particular video in an unsupervised manner, and both clustering and identification performance are shown to improve. These cast list based methods are easy to understand and implement. However, without other textual cues, they either need


manual labeling or cannot guarantee robust clustering and classification performance due to the large intra-class variances.

2) Category 2: Subtitle or Closed Caption, Local Matching Based. Subtitles and closed captions provide time-stamped dialogues, which can be exploited for alignment with the video frames. Everingham et al. [8] proposed combining the film script with the subtitle for local face-name matching: time-stamped name annotations and face exemplars are generated, and the remaining faces are then classified against these exemplars for identification. They further extended their work by replacing the nearest-neighbor classifier with multiple kernel learning for feature combination; in the new framework, non-frontal faces are handled and the coverage is extended. Researchers from the University of Pennsylvania utilized the readily available time-stamped resource, the closed caption, which has been demonstrated to be more reliable than OCR-based subtitles. They investigated the ambiguity issues [6] in the local alignment between video, screenplay and closed captions, and formulated a partially supervised multiclass classification problem. Recently, they attempted to address the character identification problem without the use of the screenplay: the reference cues in the closed captions are employed as multiple-instance constraints, and face track grouping as well as face-name association are solved in a convex formulation. Local matching based methods require time-stamped information, which is either extracted by OCR (i.e., subtitles) or unavailable for the majority of movies and TV series (i.e., closed captions). Besides, the ambiguous and partial annotation makes local matching based methods more sensitive to face detection and tracking noise.

3) Category 3: Script/Screenplay, Global Matching Based. Global matching based methods open the possibility of character identification without OCR-based subtitles or closed captions. Since local name cues are not easily obtained, the task of character identification is formulated as a global matching problem between the faces detected in the video and the names extracted from the movie script. Our method belongs to this category and can be considered an extension of Zhang's work [10].

2. Face Detection

Fig. 2 Architecture of character identification in movies

The workflow of the design is shown in Fig. 2 and starts with face detection using the Viola-Jones algorithm. The basic principle of the Viola-Jones algorithm is to scan a sub-window capable of detecting faces across a given input image. The standard image processing approach would be to rescale the input image to different sizes and then run a fixed-size detector over these images. This approach turns out to be rather time consuming because of the computation of the differently sized images. In contrast, Viola-Jones rescales the detector instead of the input image and runs the detector many times over the image, each time with a different size. At first one might suspect both approaches to be equally time consuming, but Viola-Jones devised a scale-invariant detector that requires the same


number of calculations whatever its size. This detector is constructed using a so-called integral image and simple rectangular features reminiscent of Haar wavelets.
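To make the detection step concrete, the following minimal sketch runs OpenCV's Haar-cascade implementation of the Viola-Jones detector over the frames of a video. The video filename is a hypothetical placeholder, and the scaleFactor/minNeighbors values are common defaults, not parameters taken from this paper.

```python
# A minimal Viola-Jones sketch using OpenCV's bundled Haar cascade.
# "movie.mp4" is a hypothetical input; parameters are common defaults.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

video = cv2.VideoCapture("movie.mp4")
detections = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor controls how the detector is rescaled between passes;
    # minNeighbors suppresses isolated, likely spurious detections.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    detections.append(faces)
video.release()
```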

3. Face Recognition

Much of the previous work on automated face recognition has ignored the issue of just which aspects of the face stimulus are important for identification, assuming that predefined measurements were relevant and sufficient. This suggests that an information-theoretic approach of coding and decoding face images may give insight into their information content, emphasizing the significant local and global "features". Such features may or may not be directly related to our intuitive notion of face features such as the eyes, nose, lips and hair. In the language of information theory, the relevant information in a face image must be extracted, encoded as efficiently as possible, and compared with a database of similarly encoded models.

A simple approach to extracting the information contained in a face image is to capture the variation in a collection of face images, independent of any judgment of features, and to use this information to encode and compare individual face images. In mathematical terms, we wish to find the principal components of the distribution of faces, i.e., the eigenvectors of the covariance matrix of the set of face images. These eigenvectors can be thought of as a set of features which together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so that each eigenvector can be displayed as a sort of ghostly face, which we call an eigenface.

Each face image in the training set can be represented exactly as a linear combination of the eigenfaces. The number of possible eigenfaces equals the number of face images in the training set. However, the faces can also be approximated using only the "best" eigenfaces, those with the largest eigenvalues, which therefore account for the most variance within the set of face images. The primary reason for using fewer eigenfaces is computational efficiency. The best M′ eigenfaces span an M′-dimensional subspace, the "face space", of all possible images. Just as sinusoids of varying frequency and phase are the basis functions of a Fourier decomposition (and are in fact eigenfunctions of linear systems), the eigenfaces are the basis vectors of the eigenface decomposition.

The idea of using eigenfaces was motivated by a technique developed by Sirovich and Kirby for efficiently representing pictures of faces using principal component analysis. They argued that a collection of face images can be approximately reconstructed by storing a small collection of weights for each face and a small set of standard pictures. If a multitude of face images can be reconstructed by weighted sums of a small collection of characteristic images, then an efficient way to learn and recognize faces is to build the characteristic features from known face images and to recognize particular faces by comparing the feature weights needed to (approximately) reconstruct them with the weights associated with the known individuals.

The following steps summarize the recognition process:

i) Initialization: acquire the training set of face images and calculate the eigenfaces, which define the face space.
ii) When a new face image is encountered, calculate a set of weights based on the input image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
iii) Determine whether the image is a face at all (known or unknown) by checking whether it is sufficiently close to the face space.
iv) If it is a face, classify the weight pattern as either a known person or unknown.
v) (Optional) If the same unknown face is seen several times, calculate its characteristic weight pattern and incorporate it into the known faces (i.e., learn to recognize it).
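A compact NumPy sketch of steps i)-iv) is given below. It uses the Sirovich-Kirby trick of diagonalizing the small N x N matrix instead of the full covariance matrix; the function names and the distance threshold are illustrative assumptions, not part of the original system.

```python
# A minimal eigenfaces sketch (steps i-iv above) using NumPy.
# Function names and the threshold are illustrative assumptions.
import numpy as np

def train_eigenfaces(faces, m):
    """faces: (N, H*W) matrix of flattened training faces; keep m eigenfaces."""
    mean = faces.mean(axis=0)
    A = faces - mean
    # Sirovich-Kirby trick: eigenvectors of the small N x N matrix A A^T
    # yield the top eigenvectors of the large covariance matrix A^T A.
    eigvals, eigvecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(eigvals)[::-1][:m]        # m largest eigenvalues
    eigenfaces = A.T @ eigvecs[:, order]         # map back to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    return mean, eigenfaces

def project(face, mean, eigenfaces):
    """Weight vector of a face in the m-dimensional face space."""
    return eigenfaces.T @ (face - mean)

def classify(face, mean, eigenfaces, known_weights, threshold):
    """Nearest known weight pattern, or None if the face is unknown."""
    w = project(face, mean, eigenfaces)
    dists = np.linalg.norm(known_weights - w, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None
```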

4. Affinity Graph Representation

In a movie, the interactions among characters connect them into a relationship network. Co-occurrence of names in the script and of faces [8] in the video can represent such interactions. An affinity graph is built according to the co-occurrence status among characters, as shown in Table 1 and Table 2; it can be represented as a weighted graph where vertices denote characters and edges denote the relationships among them. The more scenes in which two characters appear together, the closer they are and the larger the edge weight between them. In this way, a name affinity graph from script analysis and a face affinity graph


from video analysis can be constructed. It can be seen that some of the face affinity values differ considerably from the corresponding name affinity values due to the introduced noise. Character identification is subsequently formulated as the problem of finding the optimal vertex-to-vertex matching between the two graphs.
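As a sketch of this construction, the snippet below builds a name affinity matrix from per-scene co-occurrence counts, normalized so the largest entry is 1.0 as in Tables 1 and 2. The scene representation (a set of character names per scene) is an assumption about how the parsed script is organized.

```python
# A hedged sketch of building a co-occurrence affinity matrix.
# `scenes` as a list of per-scene name sets is an assumed script format.
import numpy as np

def affinity_matrix(scenes, names):
    idx = {name: i for i, name in enumerate(names)}
    A = np.zeros((len(names), len(names)))
    for scene in scenes:
        present = [idx[n] for n in scene if n in idx]
        for a in present:
            for b in present:
                A[a, b] += 1          # count joint appearances
    if A.max() > 0:
        A /= A.max()                  # scale so the largest entry is 1.0
    return A

# Tiny illustrative example (not the paper's data):
scenes = [{"BASAVRAJ", "VIJAY"}, {"CHANDRU", "VIJAY"}, {"VIJAY"}]
names = ["BASAVRAJ", "CHANDRU", "GOWDA", "NATESH", "VIJAY"]
print(affinity_matrix(scenes, names))
```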

Table 1 Face Affinity Matrix

        FACE1    FACE2    FACE3    FACE4    FACE5
FACE1   1.0000   0.0884   0        0.7550   0.2329
FACE2   0.0884   0.0884   0        0.0884   0.0884
FACE3   0        0        0        0        0
FACE4   0.7550   0.0884   0        0.7550   0.2329
FACE5   0.2329   0.0884   0        0.2329   0.2329

Table 2 Name Affinity Matrix

           BASAVRAJ   CHANDRU   GOWDA    NATESH   VIJAY
BASAVRAJ   0.6667     0.1667    0.0833   0        0.6667
CHANDRU    0.1667     0.1667    0.0833   0        0.1667
GOWDA      0.0833     0.0833    0.0833   0        0.0833
NATESH     0          0         0        0        0
VIJAY      0.6667     0.1667    0.0833   0        1.0000

5. Ordinal Graph Representation and Face-Name Graph Matching

The name affinity graph and the face affinity graph are built from the co-occurrence relationships; their ordinal forms are shown in Table 3 and Table 4. Due to imperfect face detection and tracking, the face affinity graph can be seen as a transform of the name affinity graph with noise added. We observed in our investigations that some statistical properties of the characters in the generated affinity matrices are relatively stable and insensitive to noise, e.g., character A has more affinity with character B than with C, or character D never co-occurs with character A. Motivated by this, we assume that while the absolute affinity values are changeable, the relative affinity relationships between characters (e.g., A is closer to B than to C) and the qualitative affinity values (e.g., whether D has co-occurred with A) usually remain unchanged. In this paper, these preserved statistical properties are exploited, and we propose to represent character co-occurrence in rank order, as illustrated by the sketch after the tables.

Table 3 Face Ordinal Affinity Matrix

        FACE1   FACE2   FACE3   FACE4   FACE5
FACE1   5       2       1       4       3
FACE2   2       2       1       2       2
FACE3   1       1       1       1       1
FACE4   4       2       1       4       3
FACE5   3       2       1       3       3

Table 4 Name Ordinal Affinity Matrix

           BASAVRAJ   CHANDRU   GOWDA   NATESH   VIJAY
BASAVRAJ   4          3         2       1        4
CHANDRU    3          3         2       1        3
GOWDA      2          2         2       1        2
NATESH     1          1         1       1        1
VIJAY      4          3         2       1        5
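The rank-order transform behind Tables 3 and 4 can be sketched as follows: within each row, equal affinity values share a rank and larger values receive larger ranks (a dense ranking). This reading reproduces Tables 3 and 4 exactly from Tables 1 and 2; it is an interpretation, not code from the paper.

```python
# Dense per-row ranking of an affinity matrix: smallest value -> rank 1,
# ties share a rank. Applied to Table 1 this reproduces Table 3.
import numpy as np

def ordinal_matrix(A):
    ranks = np.zeros(A.shape, dtype=int)
    for i in range(A.shape[0]):
        # np.unique returns the sorted distinct values and, via
        # return_inverse, each entry's index among them: a dense rank from 0.
        _, inverse = np.unique(A[i], return_inverse=True)
        ranks[i] = inverse + 1
    return ranks
```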


Eigenvalues and eigenvectors of the weighted adjacency matrices are used to match the face graph and the name graph. The eigenvalues are sorted in descending order, and ranks are assigned to the components of the eigenvector corresponding to the largest eigenvalue.
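One plausible reading of this matching step is sketched below: the leading eigenvector of each ordinal affinity matrix is computed, its components are ranked, and each face is matched to the name holding the same rank. This is an illustrative interpretation of the description above, not the authors' exact algorithm.

```python
# A hedged sketch of eigenvector-rank based face-name matching.
import numpy as np

def spectral_match(face_ordinal, name_ordinal):
    def leading_ranks(M):
        S = (M + M.T) / 2.0                         # symmetrize for eigh
        eigvals, eigvecs = np.linalg.eigh(S)
        v = np.abs(eigvecs[:, np.argmax(eigvals)])  # leading eigenvector
        return np.argsort(np.argsort(v))            # rank of each vertex
    face_ranks = leading_ranks(np.asarray(face_ordinal, dtype=float))
    name_ranks = leading_ranks(np.asarray(name_ordinal, dtype=float))
    # Pair each face with the name that holds the same rank.
    name_by_rank = {int(r): i for i, r in enumerate(name_ranks)}
    return {f: name_by_rank[int(r)] for f, r in enumerate(face_ranks)}
```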

6. Results

The accuracy varies with the complexity of the video and the number of characters present in it. Table 5 and Fig. 3 show that the accuracy changes with the background, pose and illumination of the video. The variation of the accuracy is calculated using Eq. (1).

Table 5 Accuracy of character identification

Video   No. of characters   Correctly named   Accuracy
1       5                   5                 100%
2       3                   2                 66.66%
3       2                   2                 100%

Fig. 3 Accuracy of character identification

The performance on videos of different lengths is shown in Table 6 and Fig. 4. As the number of frames increases, the execution time also increases, since more frames must be read. Because eigenfaces are used for feature extraction, the execution time decreases compared to the existing system.

Table 6 Performance of videos of different lengths

Length of video (seconds)   Execution time (seconds)
45                          538.3734
11                          189.7468
9                           154.1202


Fig. 4 Performance of videos of different lengths

7. Conclusion and Future Scope

It has been shown that the proposed approach improves the clustering and identification of faces extracted from video. Since PCA is used for feature extraction, the computational cost is reduced, which increases performance. The ordinal graph representation provides more robustness for character identification by removing the noise present in the video, and the graph matching technique provides accuracy in identifying the characters present in the video. In the future, optimal functions for different movie genres can be investigated, and character mining can be carried out. In face-name association, useful information such as gender and context will be integrated to refine the matching results. To provide more robustness in character identification, k-medoid clustering can be used. Scripts that provide more information about characters would also make it easier to identify the characters present in the video.

8. References

[1] J. Sang and C. Xu, "Robust face-name graph matching for movie character identification," IEEE Transactions on Multimedia, 2012.
[2] T. Cour, C. Jordan, E. Miltsakaki, and B. Taskar, "Movie/script: Alignment and parsing of video and text transcription," in Proc. ECCV, 2008, pp. 158-171.
[3] C. Liang, C. Xu, J. Cheng, and H. Lu, "TV parser: An automatic TV video parsing method," in Proc. CVPR, 2011, pp. 3377-3384.
[4] J. Sang and C. Xu, "Character-based movie summarization," in Proc. ACM Multimedia, 2010.
[5] R. Hong, M. Wang, M. Xu, S. Yan, and T.-S. Chua, "Dynamic captioning: Video accessibility enhancement for hearing impairment," in Proc. ACM Multimedia, 2010, pp. 421-430.
[6] T. Cour, B. Sapp, C. Jordan, and B. Taskar, "Learning from ambiguously labeled images," in Proc. CVPR, 2009, pp. 919-926.
[7] J. Stallkamp, H. K. Ekenel, and R. Stiefelhagen, "Video-based face recognition on real-world data," in Proc. ICCV, 2007, pp. 1-8.
[8] S. Satoh and T. Kanade, "Name-It: Association of face and name in video," in Proc. CVPR, 1997, pp. 368-373.
[9] T. L. Berg, A. C. Berg, J. Edwards, M. Maire, R. White, Y. W. Teh, E. G. Learned-Miller, and D. A. Forsyth, "Names and faces in the news," in Proc. CVPR, 2004, pp. 848-854.
[10] J. Yang and A. Hauptmann, "Multiple instance learning for labeling faces in broadcasting news video," in Proc. ACM Multimedia, 2005, pp. 31-40.
[11] A. W. Fitzgibbon and A. Zisserman, "On affine invariant clustering and automatic cast listing in movies," in Proc. ECCV, 2002, pp. 304-320.


Authors' Profiles

Prashanth Gowda P L is currently pursuing an M.Tech in Computer Science at VTU PG Studies (VTU), Belgaum. He received his Bachelor of Engineering in Computer Science from Acharya Institute of Technology (AIT), Bangalore. His areas of interest include video and image processing in multimedia systems, cloud computing and biometric systems.

Dr. S A Angadi is currently a Professor in the Department of Computer Science and Engineering, Visvesvaraya Technological University, Belgaum. His recent interests include image processing, intelligent systems, graph theory and the Internet of Things.

Henin Roland Karkada is from Udupi, Karnataka, and is currently pursuing his Master of Technology in Computer Science at the Center for Postgraduate Studies, VTU, Belgaum. He received his Bachelor of Engineering in Computer Science from Mangalore Institute of Technology (MITE), Mangalore. His research interests include image processing, cloud computing and the Semantic Web.

