Document Type: Journal Article
Date of Publication: 2013-04-01
Journal: JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY
Included Journals: SCIE, EI, SSCI, Scopus
Volume: 64
Issue: 4
Page Number: 818-828
ISSN No.: 1532-2882
Keywords: machine learning; information retrieval; searching
Abstract: The central issue in language model estimation is smoothing, a technique for avoiding the zero-probability problem and overcoming data sparsity. There are three representative smoothing methods: the Jelinek-Mercer (JM) method, Bayesian smoothing with Dirichlet priors (Dir), and absolute discounting (Dis), whose parameters are usually set empirically. Previous research on smoothing parameter estimation in information retrieval (IR) tends to select a single value from a set of candidate values for the whole collection, but that value may not be appropriate for every query; to improve ranking performance, the effectiveness of all the candidate values should be considered. Recently, learning to rank has become an effective approach to optimizing ranking accuracy by combining existing retrieval methods. In this article, the smoothing methods for language modeling in information retrieval (LMIR) with different parameter values are treated as different retrieval methods, and a learning to rank approach is presented that learns a ranking model from features extracted by these smoothing methods. During learning, the effectiveness of all the candidate smoothing parameters is taken into account for all queries. Experimental results on the LETOR (Learning to Rank for Information Retrieval) 3.0 and 4.0 data sets show that our approach is effective in improving the performance of LMIR.
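For reference, the three smoothing methods named in the abstract are conventionally formulated as in Zhai and Lafferty's standard treatment. The sketch below is a minimal illustration of those formulations, not code from the article; the parameter values lam, mu, and delta are arbitrary illustrative defaults, and the function name lm_scores is hypothetical.

```python
import math
from collections import Counter

def lm_scores(query, doc, collection, lam=0.1, mu=2000, delta=0.7):
    """Query log-likelihood under three language-model smoothing methods.

    query, doc: lists of tokens; collection: list of token lists.
    Returns scores under Jelinek-Mercer, Dirichlet-prior, and
    absolute-discounting smoothing (standard formulations; the
    parameter defaults here are illustrative, not the paper's).
    """
    d = Counter(doc)
    dlen = len(doc)
    coll = Counter(t for toks in collection for t in toks)
    clen = sum(coll.values())
    uniq = len(d)  # number of distinct terms in the document

    jm = dirich = dis = 0.0
    for w in query:
        p_c = coll[w] / clen  # collection (background) probability
        if p_c == 0:          # unseen in the whole collection: skip
            continue
        p_ml = d[w] / dlen    # maximum-likelihood document probability
        # Jelinek-Mercer: linear interpolation with the collection model
        jm += math.log((1 - lam) * p_ml + lam * p_c)
        # Dirichlet prior: count-based interpolation controlled by mu
        dirich += math.log((d[w] + mu * p_c) / (dlen + mu))
        # Absolute discounting: subtract delta from seen counts and
        # give the discounted mass to the collection model
        dis += math.log(max(d[w] - delta, 0) / dlen
                        + delta * uniq / dlen * p_c)
    return jm, dirich, dis
```

In the spirit of the article, evaluating such scores over a grid of parameter values (several choices of lam, mu, and delta) would yield one feature per (method, parameter) pair for each query-document pair; those feature vectors are what a learning-to-rank algorithm would then combine into a single ranking model.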