
How large must a corpus be to create a language model for Sphinx?


I would like to know how many documents or sentences or words I need to process in order to get a good language model of a domain and use it in voice recognition tools such as CMU Sphinx.


To create a decent language model for a small domain, it is usually enough to have about 100 MB of text. You can mix it with a generic language model to get better generalization.

To create a generic language model, developers use very big corpora. For example, there is the Google 1TB corpus, which contains millions of words and about a terabyte of data. The trigram part of it alone is about 40 GB of counts, and the texts behind it must amount to something like a hundred terabytes.
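
To make the mixing idea from the first paragraph concrete, here is a minimal sketch in plain Python of linear interpolation between a small in-domain model and a generic one. It uses toy unigram counts and a made-up mixing weight; real toolkits interpolate full n-gram models in ARPA format, not dictionaries like these.

```python
# Minimal sketch (not CMU Sphinx's actual tooling): linearly interpolating
# a small in-domain unigram model with a larger generic one.
# The sentences and the mixing weight `lam` are made-up illustrations.
from collections import Counter

def unigram_probs(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

domain_tokens = "set the thermostat to seventy degrees".split()
generic_tokens = "the cat sat on the mat and the dog barked".split()

p_domain = unigram_probs(domain_tokens)
p_generic = unigram_probs(generic_tokens)

lam = 0.7  # weight on the in-domain model; tune it on held-out domain text

def p_mixed(word):
    # Linear interpolation: lam * P_domain + (1 - lam) * P_generic
    return lam * p_domain.get(word, 0.0) + (1 - lam) * p_generic.get(word, 0.0)

for w in ["thermostat", "the", "dog"]:
    print(w, p_mixed(w))
```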


Adding to Nikolay's answer:

This is not a trivial task: generating a language model is time- and resource-intensive.

If you want a "good" language model, you will need a large or very large text corpus to train it (think on the order of several years of Wall Street Journal text).

"good" means: if the language model will be able to generalize from the training data to new and previously unseen input data

You should look at the documentation of the Sphinx and the HTK language model toolkits.

Please check these two threads:

Building openears compatible language model

Ruby Text Analysis

You could take a more general language model built from a bigger corpus and interpolate your smaller language model with it, e.g. using a back-off language model, but that is not a trivial task either.

see: http://en.wikipedia.org/wiki/Katz's_back-off_model
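As a rough illustration of the back-off idea: use the higher-order (here bigram) estimate when that n-gram was seen, otherwise fall back to the lower-order estimate scaled by a back-off weight. This is a deliberately simplified sketch closer to "stupid backoff" than to full Katz back-off, which also applies Good-Turing discounting and proper normalization; the sentence and the weight below are made up.

```python
# Simplified back-off sketch (not full Katz discounting/normalization).
from collections import Counter

tokens = "open the window please open the door".split()
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)
total = sum(unigrams.values())

ALPHA = 0.4  # illustrative back-off weight; real models compute this per context

def p_backoff(prev, word):
    if bigrams[(prev, word)] > 0:
        # Seen bigram: use the direct maximum-likelihood estimate.
        return bigrams[(prev, word)] / unigrams[prev]
    # Unseen bigram: back off to the scaled unigram estimate.
    return ALPHA * unigrams.get(word, 0) / total

print(p_backoff("open", "the"))     # seen bigram: direct estimate
print(p_backoff("open", "please"))  # unseen bigram: backed-off estimate
```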
