Evaluation events organized by ELRA

ELRA 18th Anniversary

On November 19-20, 2013, ELRA celebrated its 18th anniversary and invited NLP experts from all over the world to participate in the ELRA International Workshop on Sharing Language Resources: Landscape and Trends.

The main themes discussed by the participants included planning conferences to avoid overlaps, boosting the identification and discovery of Language Resources, promoting best practices in Language Resource citation in publications, encouraging the sharing of Language Resources and Tools through infrastructures, and strengthening relations with neighbouring communities.

As part of the CLARA Marie Curie Initial Training Network, ELDA and the University of Copenhagen organized a four-day thematic training course on the evaluation of Human Language Technologies, held on November 26-29 in Paris. The course was attended by 10 CLARA fellows and other researchers from the HLT field.

By introducing and describing key evaluation concepts (comparative evaluation versus competition, technology evaluation versus usage/usability evaluation), the course provided both ESRs (Early Stage Researchers) and ERs (Experienced Researchers) with the background and skills to use and implement state-of-the-art evaluation tools and techniques for speech technologies, grammars and parsing, machine translation and speech-to-speech translation, information retrieval/filtering, multimodal interfaces, etc.


The programme file provides both the schedule of the course and the slides presented during the course.

Invited Speakers

  • Victoria Arranz, ELDA, France
  • Khalid Choukri, ELDA, France
  • Edouard Geoffrois, DGA, France
  • Olivier Hamon, ELDA, France
  • Bente Maegaard, University of Copenhagen, Denmark
  • Aurélien Max, LIMSI, France
  • Djamel Mostefa, ELDA, France
  • Patrick Paroubek, LIMSI, France
  • Anders Søgaard, University of Copenhagen, Denmark

Looking into the Future of Evaluation @ LREC 2008 – 27 May 2008, Marrakech

Held in conjunction with the 6th International Language Resources and Evaluation Conference

View/download the Workshop’s Proceedings (PDF)


Workshop Chairing Team

  • Gregor Thurmair, Linguatec Sprachtechnologien GmbH, Germany – chair
  • Khalid Choukri, ELDA – Evaluations and Language resources Distribution Agency, France – co-chair
  • Bente Maegaard, CST, University of Copenhagen, Denmark – co-chair


Workshop Programme

09:00 Welcome and Introduction

09:15 Technology Advancement has Required Evaluations to Change Data and Tasks — and now Metrics
Mark Przybocki; NIST, USA

10:00 Explicit and Implicit Requirements of Technology Evaluations: Implications for Test Data Creation
Lauren Friedman, Stephanie Strassel, Meghan Lammie Glenn; Linguistic Data Consortium, USA

10:35 Coffee break

11:00 Automated MT Evaluation for Error Analysis: Automatic Discovery of Potential Translation Errors for Multiword Expressions
Bogdan Babych, Anthony Hartley; Centre for Translation Studies, University of Leeds, United Kingdom

11:35 Discussion

12:15 Reference-based vs. Task-based Evaluation of Human Language Technology
Andrei Popescu-Belis; IDIAP Research Institute, Switzerland

12:50 FEIRI: Extending ISLE’s FEMTI for the Evaluation of a Specialized Application in Information Retrieval
Keith J. Miller; The MITRE Corporation, USA

13:25 Lunch

14:30 Discussion

15:15 Evaluating a Natural Language Processing Approach in Arabic Information Retrieval
Nasredine Semmar, Laib Meriama, Christian Fluhr; CEA, LIST, Laboratoire d’ingénierie de la Connaissance Multimédia Multilingue, France; NewPhenix, France

15:50 Coffee break

16:20 A Review of the Benefits and Issues of Speaker Verification Evaluation Campaigns
Asmaa El Hannani, Jean Hennebert; Department of Computer Science, University of Sheffield, UK; HESSO, Business Information Systems, Switzerland; University of Fribourg, Switzerland

16:55 Field Testing of an Interactive Question-Answering Character
Ron Artstein, Sudeep Gandhe, Anton Leuski and David Traum; Institute for Creative Technologies, University of Southern California, USA

17:30 Discussion and conclusions

18:15 Close


HLT Evaluation Workshop in Malta – 1-2 December 2005


To celebrate ELRA’s 10th anniversary, a 2-day workshop dedicated to Human Language Technologies (HLT) Evaluation was held in Malta on December 1st and 2nd 2005.

In organizing this workshop, ELRA, whose main missions over the past decade have been to promote language resources for the HLT sector and to promote the evaluation of language engineering technologies, intended to bring together the key players in HLT evaluation to discuss it from various perspectives: general principles and purposes, technologies, past and ongoing evaluation projects, worldwide initiatives, etc.

The main objective of the workshop was to allow fruitful brainstorming on HLT evaluation, starting from what is being done today and what should be done better or differently. All sectors of HLT were addressed (speech technologies, grammars and parsing, machine translation and speech-to-speech translation, information retrieval/filtering, multimodal interfaces, etc.). The number of participants was limited to about 50 to keep the event productive.