Results 241 to 250 of about 45,366
Some of the following articles may not be open access.

Question answering in TREC

Proceedings of the tenth international conference on Information and knowledge management - CIKM'01, 2001
Traditional text retrieval systems return a ranked list of documents in response to a user's request. While a ranked list of documents can be an appropriate response for the user, frequently it is not. Usually it would be better for the system to provide the answer itself instead of requiring the user to search for the answer in a set of documents ...

Reflections on TREC

1994
This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise; characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; and analyses the test results ...

TREC-Style Evaluations

2013
TREC-style evaluation is generally considered to be the use of test collections, an evaluation methodology referred to as the Cranfield paradigm. This paper starts with a short description of the original Cranfield experiment, with the emphasis on the how and why of the Cranfield framework. This framework is then updated to cover the more recent "batch" ...
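The batch methodology the abstract refers to reduces to scoring ranked runs against a fixed set of relevance judgments (qrels). A minimal sketch, using invented topic and document IDs, of one standard test-collection measure (precision at k):

# A minimal sketch of Cranfield-style batch evaluation; topic and document
# IDs are invented for illustration.
qrels = {            # topic -> documents judged relevant
    "q1": {"d3", "d7"},
    "q2": {"d2"},
}
run = {              # topic -> system's ranked result list
    "q1": ["d7", "d1", "d3", "d9"],
    "q2": ["d5", "d2", "d8"],
}

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

for topic, relevant in sorted(qrels.items()):
    print(f"{topic}: P@3 = {precision_at_k(run[topic], relevant, k=3):.2f}")

Because topics, documents, and judgments are fixed, any system can be scored on the same collection without new user studies, which is what makes the batch framework reusable.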

The TREC robust retrieval track

ACM SIGIR Forum, 2005
The robust retrieval track explores methods for improving the consistency of retrieval technology by focusing on poorly performing topics. The retrieval task in the track is a traditional ad hoc retrieval task where the evaluation methodology emphasizes a system's least effective topics. The most promising approach to improving poorly performing topics ...
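The track's emphasis on the least effective topics is reflected in its use of the geometric mean of average precision (GMAP) alongside the usual arithmetic mean (MAP). A minimal sketch with invented per-topic AP scores; the 1e-5 floor is a common guard against zero AP, not a value taken from this paper:

import math

# Hypothetical per-topic average precision (AP) scores for one system.
ap = {"t301": 0.42, "t302": 0.05, "t303": 0.31, "t304": 0.002}

# MAP (arithmetic mean) is dominated by topics the system already handles
# well; GMAP (geometric mean) rewards improvements on the worst topics.
map_score = sum(ap.values()) / len(ap)

# Clamp AP at a small floor so one zero score cannot zero out the product.
eps = 1e-5
gmap_score = math.exp(sum(math.log(max(a, eps)) for a in ap.values()) / len(ap))

print(f"MAP  = {map_score:.4f}")   # 0.1955
print(f"GMAP = {gmap_score:.4f}")  # 0.0601

Doubling the worst topic's AP moves GMAP far more than MAP, which is why the geometric mean suits an evaluation that targets a system's weakest topics.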

TREC interactive with Cheshire II

Information Processing & Management, 2001

Text REtrieval Conference (TREC)

2017
This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval ...

TREC and Interactive Track Environments

2008
The Text REtrieval Conference (TREC) is sponsored by three agencies: the U.S. National Institute of Standards and Technology (NIST), the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense, and the U.S. intelligence community's Advanced Research and Development Activity (ARDA), to promote text retrieval research based on large test ...

TREC: An overview

Annual Review of Information Science and Technology, 2006
Donna K. Harman, Ellen M. Voorhees

State-of-the-art in biomedical literature retrieval for clinical cases: a survey of the TREC 2014 CDS track

Information Retrieval, 2015
Kirk Roberts et al.
