
Information Retrieval Evaluation
Author: Donna Harman; Series Editor: Gary Marchionini
- Publisher: Morgan & Claypool Publishers
- Publication Date: 3 Jun. 2011
- Language: English
- Print length: 120 pages
- ISBN-10: 1598299719
- ISBN-13: 9781598299717
Book Description
The lecture begins with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster “user” study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent “batch” evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques. This includes how the test collection techniques were modified and how the metrics were changed to better reflect operational environments. The final chapters look at evaluation issues in user studies — the interactive part of information retrieval — including a look at the search log studies done mainly by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations.
Table of Contents: Introduction and Early History / “Batch” Evaluation Since 1992 / Interactive Evaluation / Conclusion

