Automatic task slots assignment in Hadoop MapReduce
Kun Wang, Juwei Shi, et al.
PACT 2011
In this paper we define two alternatives to the familiar perplexity statistic (hereafter lexical perplexity), which is widely applied both as a figure of merit and as an objective function for training language models. These alternatives, respectively acoustic perplexity and the synthetic acoustic word error rate, fuse information from both the language model and the acoustic model. We show how to compute these statistics by effectively synthesizing a large acoustic corpus, demonstrate their superiority (on a modest collection of models and test sets) to lexical perplexity as predictors of language model performance, and investigate their use as objective functions for training language models. We develop an efficient algorithm for training such models, and present results from a simple speech recognition experiment, in which we achieved a small reduction in word error rate by interpolating a language model trained by synthetic acoustic word error rate with a unigram model.
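For context, the "familiar perplexity statistic" (lexical perplexity) that this abstract contrasts with acoustic perplexity and synthetic acoustic word error rate is the exponential of the average negative log-probability a language model assigns to held-out text. The sketch below is illustrative only and not taken from the paper; the unigram model and add-one smoothing are assumptions made purely to keep the example self-contained.

```python
from collections import Counter
import math

def train_unigram(tokens):
    """Estimate a unigram language model with add-one (Laplace) smoothing."""
    counts = Counter(tokens)
    total = sum(counts.values())
    vocab_size = len(counts)
    def prob(word):
        # Unseen words still get nonzero probability via the +1 smoothing term.
        return (counts[word] + 1) / (total + vocab_size)
    return prob

def lexical_perplexity(prob, tokens):
    """Lexical perplexity: exp of the average negative log-probability of the tokens."""
    neg_log_prob = -sum(math.log(prob(w)) for w in tokens)
    return math.exp(neg_log_prob / len(tokens))

if __name__ == "__main__":
    train_tokens = "the cat sat on the mat".split()
    test_tokens = "the cat on the mat".split()
    lm = train_unigram(train_tokens)
    print(f"lexical perplexity: {lexical_perplexity(lm, test_tokens):.3f}")
```

The measures proposed in the abstract differ in that they fuse language-model probabilities with acoustic-model information (e.g. acoustic confusability), which this text-only sketch does not attempt to model.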
Alistair Sutcliffe, John Carroll, et al.
CHI 1991
David G. Novick, John Karat, et al.
CHI EA 1997
Rajesh Balchandran, Leonid Rachevsky, et al.
INTERSPEECH 2009