PDF - Published Version (1MB)
- URN to cite this document:
- urn:nbn:de:bvb:355-epub-449533
- DOI to cite this document:
- 10.5283/epub.44953
Abstract
No existing evaluation infrastructure for shared tasks currently supports both reproducible online and offline experiments. In this work, we present an architecture that ties both types of experiments together with a focus on reproducibility. We provide a technical description of the infrastructure and explain how readers can contribute their own experiments to upcoming evaluation tasks.