2001
Starting in 1997, the National Institute of Standards and Technology conducted 3 years of evaluation of cross-language information retrieval systems in the Text REtrieval Conference (TREC). Twenty-two participating systems used topics (test questions) in one language to retrieve documents written in English, French, German, and Italian.
D. Harman et al.
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, 2011
Traditional tools for information retrieval (IR) evaluation, such as TREC's trec_eval, have outdated command-line interfaces with many unused features, or 'switches', accumulated over the years. They are usually seen as cumbersome applications by new IR researchers, steepening the learning curve.
Savvas A. Chatzichristofis et al.
Information Processing & Management, 1995
This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the ...
Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, 2007
We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, which we use to construct a weighted bipartite graph of TREC systems and topics.
Stefano Mizzaro, Stephen Robertson
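The average precision measure that the entry above normalizes can be sketched as follows. This is a minimal illustration: `average_precision` follows the standard definition, while `topic_normalized` shows one simple mean-centering across systems for a topic, an assumption for illustration rather than the exact normalizations defined in the paper (the function names are hypothetical).

```python
def average_precision(ranked, relevant):
    """Mean of the precision values at each rank where a relevant
    document appears, divided by the total number of relevant docs."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0


def topic_normalized(ap_scores):
    """Center one topic's AP scores on their mean across systems,
    so each score reflects how a system compares with the other
    systems on that topic (an illustrative normalization only)."""
    mean = sum(ap_scores) / len(ap_scores)
    return [ap - mean for ap in ap_scores]
```

For example, a ranking ["d1", "d2", "d3", "d4"] with relevant set {"d1", "d3"} yields precisions 1/1 and 2/3 at the two relevant ranks, so AP = (1 + 2/3) / 2 = 5/6.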
Proceedings of the tenth international conference on Information and knowledge management - CIKM'01, 2001
Traditional text retrieval systems return a ranked list of documents in response to a user's request. While a ranked list of documents can be an appropriate response for the user, frequently it is not. Usually it would be better for the system to provide the answer itself instead of requiring the user to search for the answer in a set of documents. The ...
1994
This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results ...
2013
TREC-style evaluation is generally considered to be the use of test collections, an evaluation methodology referred to as the Cranfield paradigm. This paper starts with a short description of the original Cranfield experiment, with the emphasis on the how and why of the Cranfield framework. This framework is then updated to cover the more recent "batch" ...
The TREC robust retrieval track
ACM SIGIR Forum, 2005
The robust retrieval track explores methods for improving the consistency of retrieval technology by focusing on poorly performing topics. The retrieval task in the track is a traditional ad hoc retrieval task where the evaluation methodology emphasizes a system's least effective topics. The most promising approach to improving poorly performing topics ...
TREC interactive with Cheshire II
Information Processing & Management, 2001

