Learning Probabilistic Models of Word Sense Disambiguation

Ted Pedersen
arXiv ID: 0707.3972
Last updated: 9/29/2009
This dissertation presents several new methods of supervised and unsupervised learning of word sense disambiguation models. The supervised methods focus on searching a space of probabilistic models, and the unsupervised methods rely on Gibbs Sampling and the Expectation Maximization (EM) algorithm. In both the supervised and unsupervised cases, the Naive Bayesian model is found to perform well. An explanation for this success is presented in terms of learning rates and bias-variance decompositions.
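As a rough illustration of the Naive Bayesian approach the abstract refers to, the sketch below scores each sense s of an ambiguous word by P(s) multiplied by the product of P(f_i | s) over the context-word features f_i. This is not the dissertation's implementation: the toy sense-tagged examples for "bank", the bag-of-context-words features, and the add-one smoothing are all illustrative assumptions.

```python
# Minimal, illustrative Naive Bayes word sense disambiguation sketch
# (hypothetical data and smoothing; not the dissertation's code).
from collections import Counter, defaultdict
import math

# Hypothetical sense-tagged training instances for the ambiguous word "bank":
# (context words, sense label)
TRAIN = [
    (["river", "water", "fishing"], "bank/shore"),
    (["money", "loan", "deposit"], "bank/finance"),
    (["money", "account", "teller"], "bank/finance"),
    (["muddy", "river", "erosion"], "bank/shore"),
]

def train(instances):
    """Estimate P(sense) and P(word | sense) counts for add-one smoothing."""
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for context, sense in instances:
        sense_counts[sense] += 1
        for w in context:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def disambiguate(context, sense_counts, word_counts, vocab):
    """Return the sense maximizing the log Naive Bayes score."""
    total = sum(sense_counts.values())
    best_sense, best_score = None, float("-inf")
    for sense, count in sense_counts.items():
        score = math.log(count / total)  # log prior P(sense)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            # add-one smoothed log likelihood P(word | sense)
            score += math.log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

if __name__ == "__main__":
    model = train(TRAIN)
    print(disambiguate(["loan", "money"], *model))   # expected: bank/finance
    print(disambiguate(["river", "muddy"], *model))  # expected: bank/shore
```

In a supervised setting the counts come from sense-tagged text as above; the unsupervised methods discussed in the dissertation instead estimate such model parameters from untagged text, e.g. via EM or Gibbs Sampling.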
