Capturing Complementary Information via Reversed Filter Bank and Parallel Implementation with MFCC for Improved Text-Independent Speaker Identification
Sandipan Chakroborty, Anindya Roy, Sourav Majumdar and Goutam Saha
Department of Electronics and Electrical Communication Engineering
Indian Institute of Technology, Kharagpur, India, Kharagpur-721 302
Email: {mail2sandi, anindya ici, yomazoom}@gmail.com, [email protected]

Abstract. A state of the art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This work proposes a new set of features using a complementary filter bank structure which improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed features improve on the performance baseline of the MFCC based system. The proposition is validated by experiments conducted on two different kinds of databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with two different classifier paradigms, namely Gaussian Mixture Models (GMM) and Polynomial Classifiers (PC), and for various model orders.

1 Introduction

Any Speaker Identification (SI) [1] system needs a robust acoustic feature extraction technique as a front-end block, followed by an efficient modeling scheme for generalized representation of these features. MFCC [2] has been widely accepted as such a front-end for typical SI applications as it is less vulnerable to noise perturbation, exhibits little session variability and is easy to extract. However, MFCC was first proposed for speech recognition [2], to identify monosyllabic words in continuously spoken sentences, and not for SI. Also, the calculation of MFCC is based on the human auditory system, aiming for an artificial implementation of the ear physiology under the assumption that the human ear is a good speaker recognizer too. However, no conclusive evidence exists to support the view that the ear is necessarily the best speaker recognizer. Further, the computation of MFCC involves averaging the low frequency region of the energy spectrum (approximately demarcated by an upper limit of 1 kHz) by closely spaced overlapping triangular filters, while a smaller number of less closely spaced filters of similar shape are used to average the high frequency zone. Thus MFCC represents the low frequency region more accurately than the high frequency region, and hence it captures the formants which lie in the low frequency range and which characterize the vocal tract resonances [3]. However, other formants [3] can also lie above 1 kHz, and these are not effectively captured by the wider spacing of filters in the higher frequency range. All these facts suggest that any SI system based on MFCC can possibly be improved. In this work, we extract a new feature set from the speech signal which yields information that is complementary in nature to the human vocal tract characteristics described by MFCC. This makes it very suitable for use with a parallel classifier [4] to yield higher accuracy in the SI problem. We propose to invert the entire filter bank structure such that the higher frequency range is averaged by more closely spaced filters while a smaller number of widely spaced filters are used in the lower frequency range. We calculate a new feature set named Inverted Mel Frequency Cepstral Coefficients (IMFCC) following the same procedure as normal MFCC but using this reversed filter bank structure. This effectively captures those high frequency formants ignored by the original MFCC. Further, compared to the high level features used in [5], we will show that little extra computational burden is incurred in the calculation of this complementary feature set, which can be exploited efficiently in a parallel implementation with MFCC.

The importance of MFCC in SI cannot be overstated. In order to exploit the best of both paradigms, we model two separate parallel classifiers using the two feature sets, MFCC and IMFCC, and fuse their scores to obtain the final classification decision. Viewed in another manner, we aim to reinforce the score generated by the MFCC based model with another score from a complementary source of information. Two different classifiers, GMM [6] and PC [7], are developed, where the former uses an unsupervised clustering technique while the latter involves discriminative modeling of speakers. Since the classifiers are totally independent and modeled in parallel, the order of complexity is equal to that of a single MFCC based classifier. It is shown in the Results section that such parallel classifiers perform considerably better in all cases compared to a single classifier based on MFCC.

2 Mel Frequency Cepstral Coefficients and their Calculation

According to psychophysical studies, human perception of the frequency content of sounds follows a subjectively defined nonlinear scale called the "mel" scale [2] (Fig. 1), defined as

f_{mel} = 2595 \log_{10}\left(1 + \frac{f}{700}\right)    (1)

where f is the actual frequency in Hz. This leads to the definition of MFCC, a baseline [4] acoustic feature for speech and speaker recognition applications, which can be calculated as follows. Let \{y(n)\}_{n=1}^{N} represent a frame of speech that is pre-emphasized and Hamming-windowed. First, y(n) is converted to the frequency domain by an M-point DFT and the resulting energy spectrum can be written as

|Y(k)|^2 = \left| \sum_{n=1}^{M} y(n) \cdot e^{-j 2\pi nk / M} \right|^2    (2)

where 1 \le k \le M. Next, triangular filter banks that are linearly spaced on the mel scale are imposed on the spectrum. The outputs \{e(i)\}_{i=1}^{Q} of the mel-scaled band-pass filters can be calculated as a weighted summation of the respective filter response \psi_i(k) and the energy spectrum |Y(k)|^2 as

e(i) = \sum_{k=1}^{M} |Y(k)|^2 \cdot \psi_i(k)    (3)

Finally, the DCT is taken on the log filter bank energies \{\log[e(i)]\}_{i=1}^{Q} and the final MFCC coefficients C_m can be written as

C_m = \sqrt{\frac{2}{Q}} \sum_{l=0}^{Q-1} \log[e(l+1)] \cdot \cos\left(m \cdot \frac{2l-1}{2} \cdot \frac{\pi}{Q}\right)    (4)

where 0 \le m \le R-1, and R is the desired number of cepstral features.
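As an illustration of Eqs. (1)-(4), a minimal Python sketch of the MFCC computation is given below. This is not the authors' code: the filter bank construction details, the 256-point DFT and the 8 kHz sampling rate are illustrative assumptions, and the DCT indexing follows Eq. (4) as printed (other MFCC implementations use the standard DCT-II indexing with 2l+1).

```python
import numpy as np

def hz_to_mel(f):
    # Eq. (1): subjective mel pitch for a frequency f in Hz
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # inverse of Eq. (1), used to place filter edges back on the Hz axis
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_bank(Q=20, M=256, fs=8000):
    """Q triangular filters, linearly spaced on the mel scale, over an M-point spectrum."""
    edges_hz = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), Q + 2))
    edges_bin = np.floor(edges_hz / (fs / 2.0) * (M // 2)).astype(int)
    psi = np.zeros((Q, M // 2 + 1))
    for i in range(Q):
        lo, ctr, hi = edges_bin[i], edges_bin[i + 1], edges_bin[i + 2]
        psi[i, lo:ctr] = (np.arange(lo, ctr) - lo) / max(ctr - lo, 1)   # rising edge
        psi[i, ctr:hi] = (hi - np.arange(ctr, hi)) / max(hi - ctr, 1)   # falling edge
    return psi

def log_energy_dct(e, R=20):
    """Eq. (4): DCT of the log filter bank energies, with the indexing used in the paper."""
    Q = len(e)
    l = np.arange(Q)
    return np.array([np.sqrt(2.0 / Q) *
                     np.sum(np.log(e + 1e-12) * np.cos(m * (2 * l - 1) / 2.0 * np.pi / Q))
                     for m in range(R)])

def mfcc(frame, psi, R=20):
    """Eqs. (2)-(4): energy spectrum -> mel filter bank energies -> cepstral coefficients."""
    M = 2 * (psi.shape[1] - 1)
    power = np.abs(np.fft.rfft(frame, n=M)) ** 2   # Eq. (2)
    e = psi @ power                                # Eq. (3)
    return log_energy_dct(e, R)
```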

3 The Inverted Mel Frequency Cepstral Coefficient

Although MFCC presents a way to convert a physically measured spectrum of speech into a perceptually meaningful subjective spectrum based on the human auditory system, it is not certain that the human ear, and hence MFCC, is optimal for SI. Here we propose a new scale, the Inverted Mel scale (Fig. 1), defined by a competing filter bank structure which is indicative of a hypothetical auditory system that has followed a diametrically opposite path of evolution to the human auditory system. The idea is to capture information that would otherwise be missed by the original MFCC. The new filter bank structure, which is obtained by inverting the original filter bank, is defined by the following filter response:

\hat{\psi}_i(k) = \psi_{Q-i+1}\left(\frac{M_s}{2} - k + 1\right)    (5)

where \psi_i(k) is the original Mel scale filter response. Such a filter bank (Fig. 2) corresponds to a subjectively defined scale, the Inverted Mel scale, in which the pitch is given by

\hat{f}_{mel}(f) = 2195.286 - 2595 \log_{10}\left(1 + \frac{4031.25 - f}{700}\right)    (6)

In this scale, pitch increases more and more rapidly (Fig. 1) as the frequency increases. This is in direct contrast to the human auditory scale (1), where pitch increases less rapidly with rising frequency. Hence, the higher frequency zone, which is only coarsely approximated by normal MFCC, can be represented more finely by this new scale.

Figure 1. Subjective Pitch (mels) vs Frequency (Hz) for the Mel Scale and the proposed Inverted Mel Scale.

Figure 2. Filter bank structures for the two systems: the canonical MFCC filter bank and the Inverted MFCC filter bank (relative amplitude vs frequency, 0-4000 Hz).

We find the filter outputs \{\hat{e}(i)\}_{i=1}^{Q} in the same way as for MFCC, from the same energy spectrum |Y(k)|^2, as

\hat{e}(i) = \sum_{k=1}^{M} |Y(k)|^2 \cdot \hat{\psi}_i(k)    (7)

The computational burden is reduced since we do not need to recalculate the energy spectrum |Y(k)|^2 (Fig. 3) when we go for parallel classifiers, one using MFCC and the other using IMFCC. Finally, the DCT is taken on the log filter bank energies \{\log[\hat{e}(i)]\}_{i=1}^{Q} and the final Inverted MFCC coefficients \{\hat{C}_m\}_{m=1}^{R} can be written as

\hat{C}_m = \sqrt{\frac{2}{Q}} \sum_{l=0}^{Q-1} \log[\hat{e}(l+1)] \cdot \cos\left(m \cdot \frac{2l-1}{2} \cdot \frac{\pi}{Q}\right)    (8)

Due to the inverted filter bank structure, the IMFCC is able to represent the higher frequency range more finely. Hence it effectively captures those high frequency formants missed by the original MFCC. As with MFCC, we used Q = 20 and R = 20, and used the last 19 and 10 coefficients to model the individual speakers in separate GMMs and PCs respectively.

Figure 3. Extraction of MFCC and IMFCC features: the speech signal is pre-processed, the energy spectrum is computed once via the FFT and | |^2, and the MFCC filter bank and its reversed (IMFCC) counterpart are applied in parallel, each followed by log10 and DCT to give C_m and \hat{C}_m.
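Continuing the earlier sketch, Eq. (5) amounts to flipping the MFCC filter bank along both the filter index and the frequency axis (up to a one-bin shift), and Fig. 3 amounts to feeding the same energy spectrum to both banks. The helper names below are ours, not the authors', and mel_filter_bank and log_energy_dct are the hypothetical functions defined in the previous listing.

```python
import numpy as np

def hz_to_inverted_mel(f):
    # Eq. (6): subjective pitch on the proposed Inverted Mel scale (4 kHz bandwidth assumed)
    return 2195.286 - 2595.0 * np.log10(1.0 + (4031.25 - f) / 700.0)

def inverted_filter_bank(psi):
    """Eq. (5): reverse the filter order and mirror each response along the frequency
    axis (up to a one-bin index shift), so the narrow, closely spaced filters cover
    the high-frequency region instead of the low-frequency one."""
    return psi[::-1, ::-1]

def mfcc_and_imfcc(frame, psi, R=20):
    """Parallel extraction as in Fig. 3: |Y(k)|^2 is computed once and fed to both banks."""
    M = 2 * (psi.shape[1] - 1)
    power = np.abs(np.fft.rfft(frame, n=M)) ** 2            # Eq. (2), shared by both streams
    e = psi @ power                                          # Eq. (3)
    e_hat = inverted_filter_bank(psi) @ power                # Eq. (7)
    return log_energy_dct(e, R), log_energy_dct(e_hat, R)   # Eqs. (4) and (8)
```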

4 Fusion of speaker models

Two separate models (adopting the GMM and PC methodologies) are developed for each speaker, one for the MFCC features and one for the IMFCC features. Note that GMM [6] can be viewed as a non-parametric, multivariate probability distribution model that is capable of modeling arbitrary distributions, while the PC [7] is a speaker modeling tool which utilizes inter-speaker discriminative information. Identification accuracies for both discriminative and non-discriminative models are reported here, which signifies the usefulness of the proposed scheme.

During the test phase, MFCC and IMFCC features are extracted from the incoming speech signal in the same way as in the training phase and are sent to the respective models. For each speaker, two scores are generated, one each from the MFCC and IMFCC models. A weighted sum rule [4], [5] is adopted to fuse the scores. If S_{MFCC}^{i} and S_{IMFCC}^{i} are the scores generated by the two models for the i-th speaker, then the combined score S_{com}^{i} is expressed as

S_{com}^{i} = \lambda S_{MFCC}^{i} + (1 - \lambda) S_{IMFCC}^{i}    (9)

where \lambda = 0.5 is the weight that we have chosen for the combination. However, more suitable weights and better combination schemes can be investigated further to enhance the performance of the combined system. Finally, a speaker is identified by the max rule, i.e. the speaker whose models' combined score is highest among all.
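A minimal sketch of the GMM branch of this scheme is given below, using scikit-learn's GaussianMixture as a stand-in for the speaker models of [6] (the authors do not specify an implementation, and the mixture size and covariance type here are illustrative). The PC branch would supply its scores analogously; only the weighted-sum fusion of Eq. (9) and the max rule are taken directly from this section.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(features_per_speaker, n_mix=32):
    """One GMM per enrolled speaker, fitted on that speaker's feature vectors (rows = frames)."""
    return [GaussianMixture(n_components=n_mix, covariance_type='diag').fit(X)
            for X in features_per_speaker]

def model_scores(models, test_features):
    """Average log-likelihood of the test utterance under each speaker model."""
    return np.array([m.score(test_features) for m in models])

def identify(mfcc_models, imfcc_models, test_mfcc, test_imfcc, lam=0.5):
    """Eq. (9): weighted-sum fusion of the two streams, followed by the max rule."""
    combined = (lam * model_scores(mfcc_models, test_mfcc)
                + (1.0 - lam) * model_scores(imfcc_models, test_imfcc))
    return int(np.argmax(combined))
```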

5 Experimental Evaluation

5.1 Pre-processing stage

In this work, each frame of speech is pre-processed by i) silence removal and end-point detection using an energy threshold criterion, followed by ii) pre-emphasis with a pre-emphasis factor of 0.97, iii) frame blocking with a 20 ms frame length, i.e. N = 160 samples/frame (ref. Sec. 2), and 50% overlap, and finally iv) Hamming windowing. The first coefficients (C_0 and \hat{C}_0) of the extracted MFCC and IMFCC feature sets (ref. Secs. 2 and 3) are discarded since they contain only the energy of the spectrum, and the resulting 19 and 10 dimensional vectors are used for GMM and PC respectively.
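For concreteness, the pre-processing chain i)-iv) might be sketched as below; the energy threshold value and the 8 kHz sampling rate (implied by N = 160 samples per 20 ms frame) are assumptions for illustration, not parameters specified beyond this section.

```python
import numpy as np

def preprocess(signal, fs=8000, frame_len=0.020, overlap=0.5, alpha=0.97, energy_thr=1e-4):
    """i) energy-threshold silence removal, ii) pre-emphasis, iii) 20 ms frames with
    50% overlap, iv) Hamming windowing. Returns an array of windowed frames."""
    N = int(frame_len * fs)                                   # 160 samples at 8 kHz
    # i) keep only blocks whose mean energy exceeds the (assumed) threshold
    blocks = [signal[i:i + N] for i in range(0, len(signal) - N + 1, N)]
    voiced_blocks = [b for b in blocks if np.mean(b ** 2) > energy_thr]
    voiced = np.concatenate(voiced_blocks) if voiced_blocks else signal
    # ii) pre-emphasis: x'(n) = x(n) - 0.97 x(n-1)
    emphasized = np.append(voiced[0], voiced[1:] - alpha * voiced[:-1])
    # iii)+iv) frame blocking with 50% overlap and Hamming windowing
    hop = int(N * (1.0 - overlap))
    window = np.hamming(N)
    return np.array([emphasized[i:i + N] * window
                     for i in range(0, len(emphasized) - N + 1, hop)])
```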

5.2 Databases for experiments

5.2.1 YOHO Database

The YOHO voice verification corpus [1] was collected while testing ITT's prototype speaker verification system in an office environment. Most subjects were from the New York City area, although there were many exceptions, including some non-native English speakers. A high-quality telephone handset (Shure XTH-383) was used to collect the speech; however, the speech was not passed through a telephone channel. There are 138 speakers (106 males and 32 females); for each speaker, there are 4 enrollment sessions of 24 utterances each and 10 test sessions of 4 utterances each. In this work, a closed-set text-independent speaker identification problem is attempted in which all 138 speakers are considered as client speakers. For each speaker, all 96 (4 sessions × 24 utterances) enrollment utterances are used for developing the speaker model, while 40 (10 sessions × 4 utterances) utterances are put under test. Therefore, for 138 speakers we put 138 × 40 = 5520 utterances under test and evaluated the identification accuracies.

5.2.2 POLYCOST Database

The POLYCOST database [8] was recorded as a common initiative within the COST 250 action during January-March 1996. It contains around 10 sessions recorded by 134 subjects from 14 countries. Each session consists of 14 items, two of which (the MOT01 and MOT02 files) contain speech in the subject's mother tongue. The database was collected through the European telephone network. The recording was performed with ISDN cards on two XTL SUN platforms with an 8 kHz sampling rate. In this work, a closed-set text-independent speaker identification problem is addressed where only the mother tongue (MOT) files are used. The specified guideline [8] for conducting closed-set speaker identification experiments is adhered to, i.e. 'MOT02' files from the first four sessions are used to build a speaker model while 'MOT01' files from session five onwards are taken for testing. As with the YOHO database, all speakers (131 after deletion of three speakers) in the database were registered as clients.

5.3 Score Calculation

For any closed-set speaker identification problem, identification accuracy is defined as follows in [6] and we have used the same:

\text{Percentage of identification accuracy (PIA)} = \frac{\text{No. of utterances correctly identified}}{\text{Total no. of utterances under test}} \times 100    (10)
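Eq. (10) reduces to a simple ratio over the set of test trials; a short sketch with hypothetical variable names:

```python
def percentage_identification_accuracy(true_ids, predicted_ids):
    """Eq. (10): percentage of test utterances whose identified speaker is the true speaker."""
    correct = sum(1 for t, p in zip(true_ids, predicted_ids) if t == p)
    return 100.0 * correct / len(true_ids)
```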

5.4 Experimental Results

For each database, we evaluated the performance of an MFCC based classifier, an IMFCC based classifier and a parallel classifier fusing both models.

5.4.1 Results for YOHO Database

Tables 1 and 2 present the identification results for GMM and PC respectively. The last column in each table gives the identification accuracies for the proposed combined scheme. The proposed scheme shows significant improvements over the MFCC based SI system for both classifiers over different model orders. Further, even the independent performance of the IMFCC based classifier is comparable to that of the MFCC based classifier. Also, for both classifiers, identification accuracies increase with increasing model order.

Table 1. Results (PIA) of GMM for YOHO

No. of Mixtures    MFCC      IMFCC     Combined System
2                  75.0543   77.1196   83.0435
4                  84.9557   86.3949   90.5616
8                  91.8659   92.0652   94.8913
16                 94.0942   94.1848   95.7246
32                 96.1957   95.1449   97.1920
64                 97.1920   95.3080   97.7395

Table 2. Results (PIA) of PC for YOHO

Degree of polynomial    MFCC      IMFCC     Combined System
2                       81.9384   75.0725   86.8841
3                       92.2645   86.7572   94.0038

5.4.2 Results for POLYCOST

Tables 3 and 4 show the identification accuracies for the POLYCOST database. As with the YOHO database, it can be observed from these tables that our proposed combined scheme shows significant improvement over the baseline MFCC based system in all cases. Also, results improve as model order increases. We restricted ourselves to 4 different mixture sizes for GMM because fewer feature vectors are obtained from the POLYCOST database, which prevents the development of meaningful higher order GMMs. Since modeling of the PC based classifier involves computations of a much higher order compared to the other techniques, only the first 10 cepstral coefficients (low-time components) containing important speaker-specific vocal tract information [5] are used for modeling.

Table 3. Results (PIA) of GMM for POLYCOST

No. of Mixtures    MFCC      IMFCC     Combined System
2                  64.1910   58.4881   68.8329
4                  72.4138   67.5066   77.5862
8                  76.7905   74.5358   79.7082
16                 78.9125   77.3210   81.5650

Table 4. Results (PIA) of PC for POLYCOST

Degree of polynomial    MFCC      IMFCC     Combined System
2                       64.7215   57.9576   68.3024
3                       74.1379   67.6393   76.6578

It is observed that the independent performance of IMFCC is not as good as that of MFCC for the POLYCOST database as compared to YOHO. This is due to the fact that POLYCOST contains telephone speech, in which the higher frequency information used by IMFCC is somewhat distorted. Nevertheless, the results show that the complementary information it supplies helps to improve the performance of MFCC in the parallel classifier to a great extent. The results also show that the improvement of the proposed scheme is largest for lower order models, especially in the GMM based system, for both databases. Thus it can be said that, compared to a single MFCC based classifier, a speaker can be modeled with the same accuracy but at a comparatively lower model order by an MFCC-IMFCC parallel classifier.

6 Conclusion

A new front-end acoustic feature set complementary to MFCC is proposed here that provides higher frequency speaker-specific formant information usually ignored by MFCC. The proposed feature is extracted by flipping the triangular filter bank structure used by MFCC. Speaker models developed from this proposed feature, when fused with existing MFCC based speaker models via a weighted sum rule, give significant improvements over the baseline system, which can be attributed to the availability of complementary information to the two parallel models. The experiments are conducted with two different classifiers over different model orders on two kinds of databases, one based on microphone speech and the other on telephone speech. The results demonstrate the superiority of our proposition irrespective of data type, amount of data, classifier type (both discriminative and unsupervised) and model order. Further, the proposed scheme utilizes the same computational basis as MFCC, unlike high level features that need computationally expensive algorithms for extraction. The processing time is also comparable to that of a single-stream system because of the inherent parallelism of the two feature sets. Performance could be further improved by choosing optimal weights to fuse the scores before the classification decision.

References

[1] J. P. Campbell, Jr., "Speaker Recognition: A Tutorial", Proceedings of the IEEE, vol. 85, no. 9, pp. 1437-1462, Sept. 1997.
[2] S. B. Davis and P. Mermelstein, "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences", IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-28, no. 4, pp. 357-365, Aug. 1980.
[3] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Chap. 2, pp. 11-65, Pearson Education, First Indian Reprint, 2003.
[4] D. J. Mashao and M. Skosan, "Combining Classifier Decisions for Robust Speaker Identification", Pattern Recognition, vol. 39, pp. 147-155, 2006.
[5] K. Sri Rama Murty and B. Yegnanarayana, "Combining Evidence from Residual Phase and MFCC Features for Speaker Recognition", IEEE Signal Processing Letters, vol. 13, no. 1, pp. 52-55, Jan. 2006.
[6] D. Reynolds and R. Rose, "Robust Text-Independent Speaker Identification Using Gaussian Mixture Speaker Models", IEEE Trans. Speech and Audio Processing, vol. 3, no. 1, pp. 72-83, Jan. 1995.
[7] W. M. Campbell, K. T. Assaleh, and C. C. Broun, "Speaker Recognition with Polynomial Classifiers", IEEE Trans. Speech and Audio Processing, vol. 10, no. 4, pp. 205-212, May 2002.
[8] H. Melin and J. Lindberg, "Guidelines for Experiments on the POLYCOST Database", in Proceedings of a COST 250 Workshop on Application of Speaker Recognition Techniques in Telephony, pp. 59-69, Vigo, Spain, November 1996.
