Recently Uploaded Runsets
| Runset | Testbed | Uploaded by | Date |
|---|---|---|---|
| zettair__okapi_title_description | trec2.adhoc | evaluatir | June 25, 2009 |
| zettair__pivoted_cosine0.2_title_description | trec2.adhoc | evaluatir | June 25, 2009 |
| zettair__okapi_b0.35_title_description | trec2.adhoc | evaluatir | June 25, 2009 |
| zettair__okapi_b0.25_title_description | trec2.adhoc | evaluatir | June 25, 2009 |
| zettair__dirichlet1500_title_description | trec2.adhoc | evaluatir | June 25, 2009 |
| zettair__hawkapi0.2_title_description | trec2.adhoc | evaluatir | June 25, 2009 |
| zettair__cosine_title_description | trec2.adhoc | evaluatir | June 25, 2009 |
With EvaluatIR.org you can:
- See what the state of the art in retrieval effectiveness is, using our database of evaluation results. It includes past retrieval runs submitted to TREC, benchmarks of out-of-the-box IR systems, and runs published by other researchers. Information about how the results were obtained is collected and documented wherever possible.
- Privately upload your own runs in TREC runfile format.
- Compare any results in our database with each other or with your private uploaded runs, in a range of different ways. A variety of evaluation metrics is available, and results can be compared graphically query-by-query, using statistical significance tests, ranked against other runs, using score standardisation, and more.
- Share your uploaded runs with other researchers, and link back to relevant publications.
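For reference, a TREC runfile is a whitespace-separated text file with one line per retrieved document, giving the topic number, the literal token Q0, the document identifier, the rank, the retrieval score, and a run tag. A minimal sketch (the document identifiers and scores below are purely illustrative):

```
401 Q0 FT934-5418    1 12.750 my_run_tag
401 Q0 FT921-7107    2 11.920 my_run_tag
402 Q0 LA070190-0055 1 10.480 my_run_tag
```

Lines for a given topic should be ordered by decreasing score, with ranks numbered from 1.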
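One of the significance tests commonly used for query-by-query comparison of two runs is the paired randomization (permutation) test on per-query scores. As a rough illustration of the idea — not of EvaluatIR's actual implementation — here is a stdlib-only sketch, with hypothetical per-query average-precision scores:

```python
import random

def paired_randomization_test(scores_a, scores_b, trials=10000, seed=0):
    """Two-sided paired randomization test on per-query scores.

    Under the null hypothesis, the sign of each per-query difference is
    arbitrary, so we flip signs at random and count how often the mean
    difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs)) / len(diffs)
    hits = 0
    for _ in range(trials):
        permuted = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(permuted) / len(diffs) >= observed:
            hits += 1
    return hits / trials

# Hypothetical per-query AP scores for two runs over ten topics.
run_a = [0.41, 0.22, 0.35, 0.50, 0.18, 0.44, 0.30, 0.27, 0.39, 0.21]
run_b = [0.38, 0.25, 0.30, 0.49, 0.20, 0.40, 0.28, 0.26, 0.35, 0.19]
p_value = paired_randomization_test(run_a, run_b)
print(f"p = {p_value:.3f}")
```

With more topics, a small p-value gives evidence that the difference between the two runs is not due to chance on this topic set.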
Please keep in mind that the site is very much a work in progress; there will be glitches and missing features.
EvaluatIR: An Online Tool for Evaluating and Comparing IR Systems on William Webber's blog.
The 32nd Annual ACM SIGIR Conference, where we will be presenting a demo of the system.