Proceedings of the Tenth International Conference on Information and Knowledge Management (CIKM '01), 2001
Traditional text retrieval systems return a ranked list of documents in response to a user's request. While a ranked list of documents can be an appropriate response for the user, frequently it is not. Usually it would be better for the system to provide the answer itself instead of requiring the user to search for the answer in a set of documents. The ...
1994
This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results ...
2013
TREC-style evaluation is generally considered to be the use of test collections, an evaluation methodology referred to as the Cranfield paradigm. This paper starts with a short description of the original Cranfield experiment, with the emphasis on the how and why of the Cranfield framework. This framework is then updated to cover the more recent "batch" ...
The TREC robust retrieval track
ACM SIGIR Forum, 2005
The robust retrieval track explores methods for improving the consistency of retrieval technology by focusing on poorly performing topics. The retrieval task in the track is a traditional ad hoc retrieval task where the evaluation methodology emphasizes a system's least effective topics. The most promising approach to improving poorly performing topics ...
TREC interactive with Cheshire II
Information Processing & Management, 2001
Text REtrieval Conference (TREC)
2017
This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval ...
TREC and Interactive Track Environments
2008
The Text REtrieval Conference (TREC) is sponsored by three agencies, the U.S. National Institute of Standards and Technology (NIST), the U.S. Department of Defense's Advanced Research Projects Agency (DARPA), and the U.S. intelligence community's Advanced Research and Development Activity (ARDA), to promote text retrieval research based on large test ...
Annual Review of Information Science and Technology, 2006
Donna K. Harman, Ellen M. Voorhees

