The Cross-Language Evaluation Forum (CLEF) promotes R&D in multilingual information access by:
* (i) developing an infrastructure for the testing, tuning and evaluation of information retrieval systems operating on European languages in both monolingual and cross-language contexts, and
* (ii) creating test-suites of reusable data which can be employed by system developers for benchmarking purposes.
The CLEF Text Retrieval System Evaluation activity is coordinated in Europe by the DELOS Network of Excellence for Digital Libraries and organized in collaboration with the US National Institute of Standards and Technology (NIST) and the TREC Conferences.
Originally (1997-99), a track for the evaluation of Cross-Language Information Retrieval (CLIR) systems was included in the well-known US Text REtrieval Conference (TREC) series. This track was coordinated jointly by NIST and a group of European volunteers. At the end of 1999, by agreement between NIST and the European coordinators, the cross-language evaluation activity for European languages was moved to Europe. The CLEF framework was thus initially set up within the DELOS Network of Excellence for Digital Libraries, an IST project under the Fifth Framework Programme. Since 2000, CLEF evaluation campaigns have been conducted on a yearly basis, addressing many different languages and tasks. CLEF is currently supported by the TrebleCLEF project.
In the framework of CLEF, ELDA is in charge of:
- conducting several user needs and best practices surveys,
- identifying the data and negotiating distribution rights with their owners,
- coordinating the production of CLEF test suites (CLEF evaluation packages).
During the CLEF campaigns, ELDA also coordinates the evaluation process and/or the production of evaluation material for several tasks.
Contact Khalid Choukri @