QUERY LANGUAGE MODELING FOR VOICE SEARCH

C. Chelba, J. Schalkwyk, T. Brants, V. Ha, B. Harb, W. Neveitt, C. Parada*, P. Xu
Google, Inc., 1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA

* The author performed the work as a summer intern at Google, NYC. Her affiliation is with the CLSP, The Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA.

ABSTRACT

The paper presents an empirical exploration of google.com query stream language modeling. We describe the normalization of the typed query stream, resulting in out-of-vocabulary (OoV) rates below 1% for a one million word vocabulary. We present a comprehensive set of experiments that guided the design decisions for a voice search service. In the process we rediscovered a lesser-known interaction between Kneser-Ney smoothing and entropy pruning, and found empirical evidence that hints at non-stationarity of the query stream, as well as strong dependence on the English locale (USA, Britain, and Australia).

Index Terms: language modeling, voice search, query stream

1. INTRODUCTION

A typical voice search language model used in our system for the US English query stream is trained as follows:
• vocabulary size: 1M words, OoV rate 0.57%;
• training data: 230B words, a random sample of anonymized queries that did not trigger spelling correction.
The resulting size, as well as its performance on unseen query data (10k queries) when using Katz smoothing, is shown in Table 1. We note a few key aspects:
• the first-pass LM (15 million n-grams) requires very aggressive pruning, to about 0.1% of its unpruned size, in order to make it usable in static FST-based ASR decoders;
• the perplexity hit taken by pruning the LM is significant, 50% relative; similarly, the 3-gram hit ratio is halved;
• the impact on WER due to pruning is significant, yet lower in relative terms: about 10% relative, as we show in Section 6;
• the unpruned model has excellent n-gram hit ratios on unseen test data: 77% for n = 5, and 97% for n = 3;
• the choice of n = 5 is due to diminishing returns from higher n-gram orders: a 7-gram LM is four times larger than the 5-gram LM trained from the same data and using the same vocabulary, at no gain in perplexity.
The paper attempts to explain our design choices. The next section describes the text normalization that allows us to use a one million word vocabulary and obtain out-of-vocabulary (OoV) rates lower than 1%, as well as the excellent n-gram hit ratios presented in Table 1.
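To make the hit-ratio numbers in Table 1 concrete, the following is a minimal sketch of how per-order n-gram hit ratios can be computed over a tokenized test set. Representing the LM as plain sets of n-gram tuples and ignoring query-boundary padding and OoV mapping are simplifying assumptions, not a description of our system.

    def hit_ratios(test_queries, ngrams_by_order, max_order=5):
        # ngrams_by_order[k] is the set of k-gram tuples stored in the LM.
        # For each order k, count the test positions whose k-gram (the word
        # plus its k-1 predecessors within the query) is present in the LM.
        hits = {k: 0 for k in range(1, max_order + 1)}
        total = 0
        for query in test_queries:
            words = query.split()
            for i in range(len(words)):
                total += 1
                for k in range(1, max_order + 1):
                    if i + 1 < k:
                        continue  # not enough history inside the query
                    if tuple(words[i + 1 - k:i + 1]) in ngrams_by_order[k]:
                        hits[k] += 1
        return {k: hits[k] / total for k in hits}

    # Toy usage with a made-up 2-gram "model":
    lm = {1: {("weather",), ("new",), ("york",)}, 2: {("new", "york")},
          3: set(), 4: set(), 5: set()}
    print(hit_ratios(["weather new york"], lm))  # {1: 1.0, 2: 0.33..., ...}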

The use of Katz smoothing in our language model is not accidental: we examined the interaction between Stolcke pruning and various n-gram LM smoothing techniques under aggressive pruning regimes which cut the original LM to under 1% of its original size. The main finding is that the increasingly popular family of Kneser-Ney [1] smoothing methods is in fact poorly suited for such aggressive pruning regimes, as explained in Section 3. When evaluated in terms of both perplexity and ASR word error rate, the more traditional techniques, e.g. Katz/Good-Turing [2], perform significantly better after pruning. We wish to emphasize that this is not a new result: [3] also pointed out this behavior of Kneser-Ney models (alas, we were unaware of this at the time of our experiments and rediscovered it on our own) and proposed a solution that alleviates the problem by growing the LM instead of pruning it; [4] also suggests a variation of Kneser-Ney smoothing more suitable for pruning.

We then present experiments that show the temporal and spatial dependence of the English language models. Somewhat unexpectedly, using more training data does not result in an improved language model, despite the fact that it is extremely well matched to the unseen test data. The English language models built from training data originating in three locales (USA, Britain, and Australia) exhibit strong locale-specific behavior, both in terms of perplexity and OoV rate. The last section presents speech recognition experiments on a voice search test set. We conclude by highlighting our main findings.

2. TEXT NORMALIZATION

In order to build a language model for spoken query recognition, we bootstrap from written queries to google.com. Written queries provide a data-rich environment for modeling queries, but this requires robustly transforming written text into spoken form. Table 2 lists a few example queries and their corresponding spoken equivalents. Written queries contain a fair number of cases which require special attention to convert to spoken form: analyzing the top million vocabulary items before text normalization, we see approximately 20% URLs and over 20% numeric items in the query stream. Without careful text normalization the vocabulary of the system would grow substantially.

We adopt a finite-state approach to text normalization. Let T(written) be an acceptor that represents the written query. Conceptually, the spoken form is computed as follows:

    T(spoken) = bestpath(T(written) ◦ N(spoken))

where N(spoken) represents the transduction from written to spoken form.

Order   No. n-grams   Pruning               PPL   n-gram hit-ratios (order n, ..., 1)
3       15M           entropy (Stolcke)     190   47/93/100
3       7.7B          none                  132   97/99/100
5       12.7B         cut-off (1-1-2-2-2)   108   77/88/97/99/100

Table 1. Typical voice search LM, Katz smoothing: the LM is trained on 230 billion words using a vocabulary of 1 million words, achieving an out-of-vocabulary rate of 0.57% on test data.

Written Query           Spoken Query
weather scarsdale, ny   weather scarsdale new york
                        weather in scarsdale new york
bankofamerica.com       bank of america dot com
81 walker rd            eighty one walker rd
10:30am                 ten thirty A M
at&t                    A T and T
espn                    ESPN

Table 2. Example written queries and their corresponding spoken form.

Fig. 1. Block diagram for context-aware text normalization: an Annotator feeds category-specific Normalizers (time/date/number, location, URL); e.g., the written query "$20 books on amazon.com" is normalized to "twenty dollar books on amazon dot com".

Note that composition with N(spoken) might introduce multiple alternate spoken representations of the input text. For the purpose of computing n-grams for spoken language modeling of queries, we use the bestpath operation to select a single most likely interpretation.

The text normalization is run in multiple phases, depicted in Figure 1. In the first step we annotate the data: we categorize parts (substrings) of queries into a set of known categories (e.g. time, date, URL, location). Since the query is annotated, it is possible to perform context-aware normalization on the substrings. Each category has a corresponding text normalization transducer Ncat(spoken) that is used to normalize the substring. Depending on the category, we construct the text normalization transducer using either rule-based or statistical approaches. For numeric categories like date, time, and numbers, it is easy enough to describe N(spoken) using context-dependent rewrite rules. For the URL normalizer Nurl(spoken), we train a statistical word decompounder that segments the string into its word constituents; for example, one reads the URL cancercentersofamerica.com as "cancer centers of america dot com".
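To illustrate the annotate-then-normalize scheme (and only that; the real system uses the FST machinery described here), the sketch below tags query substrings with a category using simple patterns and then applies a toy per-category normalizer. The categories, patterns, and word mappings are hypothetical simplifications.

    import re

    # Toy stand-ins for the category-specific Ncat(spoken) normalizers;
    # they only cover the running example "$20 books on amazon.com".
    NUMBERS = {"20": "twenty", "81": "eighty one"}

    def normalize_amount(tok):            # e.g. "$20" -> "twenty dollar"
        return NUMBERS.get(tok[1:], tok[1:]) + " dollar"

    def normalize_url(tok):               # e.g. "amazon.com" -> "amazon dot com"
        # A real URL normalizer would also decompound the host name (see below).
        return " dot ".join(tok.split("."))

    # Phase 1: annotate tokens with a category; phase 2: normalize per category.
    ANNOTATORS = [
        (re.compile(r"^\$\d+$"), normalize_amount),
        (re.compile(r"^\S+\.(com|org|net)$"), normalize_url),
    ]

    def normalize_query(query):
        out = []
        for tok in query.split():
            for pattern, normalizer in ANNOTATORS:
                if pattern.match(tok):
                    out.append(normalizer(tok))
                    break
            else:
                out.append(tok)           # unannotated tokens pass through
        return " ".join(out)

    print(normalize_query("$20 books on amazon.com"))
    # -> "twenty dollar books on amazon dot com"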

The URL decompounding transducer (decompounder) is built from the annotated data. Let Q be the set of annotated queries, and let U be the set of substrings of these queries that are labeled as URLs. For a string s of length k, let I(s) be the transducer that maps each character in s to itself; i.e., the i-th transition in I(s) has input and output label s(i). I(s) represents the word segmented into characters. Further, let T(s) be the transducer that maps the sequence of characters in s to s; i.e., the first transition in T(s) has input s(1) and output s, and the i-th transition, for i ≠ 1, has input s(i) and output ε. T(s) represents the transduction of the spelled form of the word to the word itself. For a set of strings S, we define

    T(S) = ∪_{s ∈ S} T(s)

where ∪ is the union operation on transducers. T(S) therefore represents the transduction of the spelling of the word to the word itself for the whole vocabulary; Figure 2 illustrates the operation of T(·). The queries in Q and their frequencies are used to train an LM LBASE. Let VBASE be its vocabulary. We build the decompounder as follows:

1. For each u ∈ U, define N(u) as

    N(u) = bestpath(I(u) ◦ T*(VBASE) ◦ LBASE)    (1)

where '*' is the Kleene closure and '◦' is the composition operator.

2. N(U) = ∪_{u ∈ U} N(u) is the URL decompounder.

The transducer I(u) ◦ T*(VBASE) in (1) represents the lattice of all possible segmentations of u using the words in VBASE, where each path from the start state to a final state is a valid segmentation. The composition with the LM LBASE scores every path. Finally, N(u) is the path with the highest probability, i.e. the most likely segmentation. As an example, Figure 3 depicts I(u) ◦ T*(VBASE) for u = myspacelayouts. Each path in this lattice is a valid decomposition, and Table 3 lists a sample of these paths. After scoring all the paths via the composition with LBASE, we choose the best path to represent the spoken form of the URL.
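The construction in (1) can be emulated outside the FST framework. Below is a minimal pure-Python sketch that finds the most likely segmentation of a URL string under a toy unigram cost model standing in for LBASE (in the real system LBASE is an n-gram LM over the query data and the search is performed by FST composition); the vocabulary and costs are made up.

    import math

    # Toy stand-in for LBASE: negative log-probabilities of a few words.
    NEGLOGP = {"my": 4.0, "space": 6.0, "myspace": 7.0,
               "lay": 9.0, "outs": 9.0, "layouts": 8.0}

    def best_segmentation(s):
        # Viterbi-style search over the lattice of segmentations of s into
        # known words, returning the lowest-cost (most likely) one.
        # best[i] = (cost of best segmentation of s[:i], its word sequence)
        best = [(0.0, [])] + [(math.inf, None)] * len(s)
        for i in range(1, len(s) + 1):
            for j in range(i):
                word = s[j:i]
                if word in NEGLOGP and best[j][0] + NEGLOGP[word] < best[i][0]:
                    best[i] = (best[j][0] + NEGLOGP[word], best[j][1] + [word])
        return best[len(s)][1]

    print(best_segmentation("myspacelayouts"))  # -> ['myspace', 'layouts']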

Fig. 2. T(S) for the set of words S = {my, space, myspace, lay, outs, layouts}, where ε denotes the empty string.

Fig. 3. The lattice I(u) ◦ T*(VBASE) of all possible segmentations for u = myspacelayouts using words in VBASE.

Possible Segmentations
myspace layouts        (highest-probability path)
my space layouts
my space lay outs
my space l a y outs

Table 3. Sample segmentations from Fig. 3. The marked segmentation is the highest-probability path as determined by the composition with LBASE.

3. PRUNING INTERACTION WITH SMOOTHING

We examined the interaction between Stolcke pruning [5] and various n-gram LM smoothing techniques under aggressive pruning regimes which cut the original LM to under 1% of its original size. The main finding is that the increasingly popular family of Kneser-Ney [1] smoothing methods is in fact poorly suited for such aggressive pruning regimes.

Seymore-Rosenfeld pruning [6] is an alternative to Stolcke pruning that relies on the relative frequency f(h) of a given context h in the training data, instead of the probability P(h) computed from lower-order estimates. For Kneser-Ney models this eliminates one source of potential problems in pruning: since the P(h) calculation involves only lower-order n-gram estimates, it uses the diversity-based estimates, which are quite different from the relative-frequency ones.

Figure 4 shows the change in perplexity with the number of n-grams in the entropy-pruned model. We experimented with 4-gram models built on Broadcast News data, as reported in [7]. Although the unpruned Kneser-Ney models start from a slightly lower perplexity than the Katz model, they degrade faster with pruning. In experiments for voice search we observed large relative differences in perplexity between Kneser-Ney and Katz models after pruning them to 0.1% of their original size.


Fig. 4. Stolcke-pruned 4-gram model perplexity (log2) as a function of model size (number of n-grams, log2) for Katz, Kneser-Ney, and Interpolated Kneser-Ney models, as implemented by the SRILM toolkit. The Interpolated Kneser-Ney model is estimated by turning on the -interpolate option in SRILM.

Aggressive Stolcke pruning of a Kneser-Ney 4-gram model can lead to a relative increase in perplexity that is twice as large as for the other smoothing techniques evaluated: 135% vs. 65% relative increase. The differences were also found to impact speech recognition accuracy significantly, by approximately 10% relative. Since the difference between Katz and Kneser-Ney is very small on unpruned models and significant on pruned models, we chose Katz smoothing when building an LM for voice search. A thorough analysis of the interaction between LM smoothing and aggressive pruning in this context is presented in [7].
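To make the contrast between the two pruning criteria concrete, here is a deliberately simplified sketch: it keeps only the weighted log-probability loss of dropping a single n-gram and ignores the renormalization of the backoff weight that the full relative-entropy criterion accounts for, and all numbers are made up.

    import math

    def prune_score(context_weight, p_w_given_h, p_backoff_w_given_h):
        # Weighted log-probability loss from dropping the n-gram (h, w) and
        # re-estimating it from the backoff distribution.
        return context_weight * p_w_given_h * (
            math.log(p_w_given_h) - math.log(p_backoff_w_given_h))

    p_w_given_h = 0.30   # P(w | h) stored in the model
    p_backoff = 0.05     # estimate for w after backing off from h

    # Stolcke-style weighting: the model probability P(h), which for
    # Kneser-Ney is built from diversity-based lower-order estimates.
    print(prune_score(0.002, p_w_given_h, p_backoff))

    # Seymore-Rosenfeld-style weighting: the relative frequency f(h) observed
    # in the training data, which can differ substantially from P(h) above.
    print(prune_score(0.010, p_w_given_h, p_backoff))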

Training Set   Test Set PPL, unpruned   Test Set PPL, pruned
230B           121                      205
BIG            132                      209

Table 4. Pruned and unpruned 3-gram language model perplexity when trained on the most recent 230 billion words (230B) and on a much larger amount of training data prior to the test data (BIG), respectively.

4. QUERY STREAM NON-STATIONARITY

Our first attempt at improving the language model was to use more training data: a significantly larger amount of training data (BIG) vs. the most recent 230 billion words (230B) prior to September 2008; the 230B corpus is the most recent subset of BIG. As test data we used a random sample of 10k queries from Sept-Dec 2008. The first somewhat surprising finding was that this had very little impact on the OoV rate for a 1M word vocabulary: 0.77% (230B vocabulary) vs. 0.73% (BIG vocabulary). Perhaps even more surprising is the fact that the significantly larger training set did not yield a better language model, despite the training data being clearly well matched, as illustrated in Table 4. In fact, we observed a significant reduction in PPL (10%) when using the more recent 230B data. Pruning masks this effect, and the differences in PPL and WER become insignificant after reducing the language model size to approximately 10 million 3-grams. Since the vocabulary and training data set change between the two rows, the PPL differences need to be analyzed in a more careful experimental setup.

A superficial interpretation of the results seems to contradict the "there's no data like more data" dictum, recently reiterated in a somewhat stronger form in [8], [9] and [10]. Our experience has been that the supply of "more data" needs to be matched with increased demand on the modeling side, usually by increasing the model capacity, typically achieved by estimating more parameters. Experiments reported in Section 6 improve performance by keeping the amount of training data constant (albeit very large) and increasing the n-gram model size, both by adding more n-grams at fixed n and by increasing the model order n. As such, it may well be the case that the increase in PPL for the BIG model is in fact due to limited capacity in the 3-gram model. More investigation is needed to disentangle the effects of query stream non-stationarity from possible mismatched model capacity issues. A complete set of experiments needs to:
• let the n-gram order grow as large as the data allows;
• build a sequence of models trained on exactly the same amount of data, obtained by sliding a time-window of varying length over the query stream, and control for the ensuing vocabulary mismatches.
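For reference, the perplexity numbers compared in Table 4 (and throughout the paper) are of the standard form PPL = 10^(-(1/N) Σ log10 P(w_i | h_i)). A minimal sketch for a Katz-style backoff model follows; the dictionary representation, the OoV floor, and the handling of query boundaries are simplifying assumptions.

    def logprob(word, context, probs, backoffs):
        # log10 P(word | context) for a backoff n-gram model: use the longest
        # stored n-gram, otherwise add the backoff weight of the context and
        # recurse on a shorter context.
        ngram = context + (word,)
        if ngram in probs:
            return probs[ngram]
        if not context:
            return probs.get((word,), -99.0)  # crude OoV floor
        return backoffs.get(context, 0.0) + logprob(word, context[1:], probs, backoffs)

    def perplexity(queries, probs, backoffs, order=3):
        total, n_words = 0.0, 0
        for q in queries:
            words = q.split()
            for i, w in enumerate(words):
                context = tuple(words[max(0, i - order + 1):i])
                total += logprob(w, context, probs, backoffs)
                n_words += 1
        return 10.0 ** (-total / n_words)

    # Toy 2-gram model in log10 space (made-up values):
    probs = {("weather",): -1.0, ("new",): -1.2, ("york",): -1.5,
             ("new", "york"): -0.2}
    backoffs = {("new",): -0.1}
    print(perplexity(["weather new york"], probs, backoffs, order=2))  # ~6.3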

5. LOCALE MATTERS

We also built locale-specific English language models using training data prior to September 2008 across three English locales: USA (USA), Britain (GBR, about a quarter of the USA amount) and Australia (AUS, about a quarter of the GBR amount). The test data consisted, as before, of 10k queries for each locale, sampled randomly from Sept-Dec 2008.

Tables 5, 6 and 7 show the results. The dependence on locale is surprisingly strong: using an LM on out-of-locale test data roughly doubles the OoV rate and perplexity, whether pruned or unpruned.

Training Locale   Test: USA   Test: GBR   Test: AUS
USA               0.7         1.3         1.6
GBR               1.3         0.7         1.3
AUS               1.3         1.1         0.7

Table 5. Out-of-vocabulary rate (%): a locale-specific vocabulary halves the OoV rate.

Training Locale   Test: USA   Test: GBR   Test: AUS
USA               132         234         251
GBR               260         110         224
AUS               276         210         124

Table 6. Perplexity of the unpruned LM: a locale-specific LM halves the perplexity of the unpruned LM.

We have also built a combined model by pooling data across locales, with the results shown in the last row of Table 7. Combining the data negatively impacts all locales, in particular the ones with less data. The farther a locale is from USA (as seen in the first row, GBR is closer to USA than AUS), the more negative the impact of pooling all the data, relative to using only the data from that locale.

Training Locale   Test: USA   Test: GBR   Test: AUS
USA               210         369         412
GBR               442         150         342
AUS               422         293         171
combined          227         210         271

Table 7. Perplexity of the pruned LM: a locale-specific LM halves the perplexity of the pruned LM; pooling all the data is suboptimal.

6. EFFECT OF LANGUAGE MODEL SIZE ON SPEECH RECOGNITION ACCURACY

The work described in [11] and [12] enables us to evaluate relatively large query language models in the first pass of our ASR decoder by representing the language model in the OpenFst [13] framework. Figures 5 and 6 show the PPL and word error rate (WER) for two language models (3-gram and 5-gram, respectively) built on the 230B training data, after entropy pruning to various sizes in the range of 15 million to 1.5 billion n-grams. Perplexity is evaluated on the test set described in Section 4; word error rate is measured on another test set representative of the voice search task. As can be seen, perplexity is very well correlated with WER, and the size of the language model has a significant impact on speech recognition accuracy: increasing the model size by two orders of magnitude reduces the WER by 10% relative.

Fig. 5. 3-gram language model perplexity (left axis) and word error rate (right axis) as a function of language model size (number of n-grams, in billions, log scale); lower curve is PPL.

Fig. 6. 5-gram language model perplexity (left axis) and word error rate (right axis) as a function of language model size (number of 5-grams, in billions, log scale); lower curve is PPL.

We have also implemented lattice rescoring using the distributed language model architecture described in [14], with the results presented in Table 8. This allows us to validate empirically that rescoring lattices generated with a relatively small first-pass language model (in this case 15 million 3-grams, denoted 15M 3-gram in Table 8) yields the same results as first-pass decoding with a large language model. A secondary benefit of the lattice rescoring setup is that one can evaluate the ASR performance of much larger language models.
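As an illustration only (not the distributed rescoring machinery of [14]), the effect of such a second pass can be sketched with n-best rescoring: keep the first-pass acoustic score and swap the small first-pass LM score for the big second-pass LM score. The interface, scores, and LM weight below are hypothetical.

    def rescore_nbest(nbest, second_pass_lm_logprob, lm_weight=12.0):
        # nbest: list of (hypothesis, acoustic_logprob, first_pass_lm_logprob).
        # Re-rank by replacing the first-pass LM score with the second-pass
        # LM score; the acoustic score is reused unchanged.
        rescored = []
        for hyp, ac_logprob, _first_pass_lm in nbest:
            total = ac_logprob + lm_weight * second_pass_lm_logprob(hyp)
            rescored.append((total, hyp))
        return max(rescored)[1]

    # Toy usage: a hypothetical big LM prefers the correct hypothesis.
    big_lm = {"weather in scarsdale new york": -8.0,
              "weather in scarsdale new work": -14.0}
    nbest = [("weather in scarsdale new work", -120.0, -1.1),
             ("weather in scarsdale new york", -121.0, -1.3)]
    print(rescore_nbest(nbest, lambda hyp: big_lm[hyp]))
    # -> "weather in scarsdale new york"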


Pass   Language Model   PPL   WER
1st    15M 3-gram       191   18.7
1st    1.6B 5-gram      112   16.9
2nd    15M 3-gram       191   18.8
2nd    1.6B 5-gram      112   16.9
2nd    12.7B 5-gram     108   16.8

Table 8. Speech recognition language model performance when used in the 1st pass or in the 2nd pass (lattice rescoring).

7. CONCLUSIONS

Our experiments show that with careful text normalization the query stream is not as "wild" as it seems at first sight. One can achieve excellent OoV rates for a one million word vocabulary, and n-gram hit ratios of 77/88% even at n = 5/4, respectively.

We have confirmed in a different experimental setup the lesser-known fact that aggressive entropy pruning (in particular Stolcke pruning) significantly degrades language models built using Kneser-Ney smoothing, whereas Katz smoothing performs much better.

Experimental evidence suggests that the query stream is non-stationary, and that more data does not automatically imply better models, even when the data is clearly matched to the test data. More careful experiments are needed to adjust model capacity and to identify an optimal way of blending older and more recent data, attempting to separate the stationary and non-stationary components of the query stream. Less surprisingly, we have shown that locale matters significantly for English query data across USA, Great Britain and Australia.

As a concluding remark, we generally see excellent correlation of WER with PPL under various pruning regimes, as long as the training set and vocabulary stay constant.

8. ACKNOWLEDGMENTS

We thank Mark Paskin for providing the spelling correction data, and Zhongli Ding for test set query selection.

9. REFERENCES

[1] R. Kneser and H. Ney, "Improved backing-off for m-gram language modeling," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 1995, vol. 1, pp. 181–184.

[2] S. Katz, "Estimation of probabilities from sparse data for the language model component of a speech recognizer," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 35, pp. 400–401, March 1987.

[3] V. Siivola, T. Hirsimaki, and S. Virpioja, "On Growing and Pruning Kneser-Ney Smoothed N-Gram Models," IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 5, pp. 1617–1624, 2007.

[4] R. Kneser, "Statistical language modeling using a variable context length," in Proc. ICSLP '96, Philadelphia, PA, 1996, vol. 1, pp. 494–497.

[5] A. Stolcke, "Entropy-based pruning of back-off language models," in Proceedings of the News Transcription and Understanding Workshop, Lansdowne, VA, 1998, pp. 270–274, DARPA.

[6] K. Seymore and R. Rosenfeld, "Scalable back-off language models," in Proceedings ICSLP, Philadelphia, 1996, vol. 1, pp. 232–235.

[7] C. Chelba, T. Brants, W. Neveitt, and P. Xu, "Study on interaction between entropy pruning and Kneser-Ney smoothing," in Proceedings of Interspeech, Makuhari, Japan, 2010, to appear.

[8] M. Banko and E. Brill, "Mitigating the paucity-of-data problem: Exploring the effect of training corpus size on classifier performance for natural language processing," in Proceedings of the First International Conference on Human Language Technology Research, Association for Computational Linguistics, Morristown, NJ, USA, 2001, pp. 1–5.

[9] F. J. Och, "Statistical machine translation: Foundations and recent advances," Presentation at MT-Summit, 2005.

[10] A. Halevy, P. Norvig, and F. Pereira, "The unreasonable effectiveness of data," IEEE Intelligent Systems, vol. 24, no. 2, pp. 8–12, 2009.

[11] B. Harb, C. Chelba, J. Dean, and S. Ghemawat, "Back-off language model compression," in Proceedings of Interspeech, Brighton, UK, 2009, pp. 325–355.

[12] C. Allauzen, J. Schalkwyk, and M. Riley, "A generalized composition algorithm for weighted finite-state transducers," in Proceedings of Interspeech, 2009, pp. 1203–1206.

[13] C. Allauzen, M. Riley, J. Schalkwyk, W. Skut, and M. Mohri, "OpenFst: A general and efficient weighted finite-state transducer library," in Proceedings of the Ninth International Conference on Implementation and Application of Automata (CIAA 2007), vol. 4783 of Lecture Notes in Computer Science, pp. 11–23, Springer, 2007, http://www.openfst.org.

[14] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean, "Large language models in machine translation," in Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), 2007, pp. 858–867.
