Short paper
Aligners: Decoupling LLMs and Alignment
Lilian Ngweta, Mayank Agarwal, et al.
ICLR 2024
Large language models (LLMs) need to be aligned with human expectations to ensure their safety and utility in most applications. Alignment is challenging, costly, and needs to be repeated for every LLM and alignment criterion. We propose to decouple LLMs and alignment by training aligner models that can be used to align any LLM for a given criterion on an as-needed basis, thus also reducing the potential negative impact of alignment on performance. Our recipe for training aligner models relies solely on synthetic data generated with a (prompted) LLM and can be easily adjusted for a variety of alignment criteria. We illustrate our method by training an "ethical" aligner and verify its efficacy empirically.
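The decoupling described in the abstract amounts to a two-stage pipeline: an off-the-shelf base LLM drafts a response, and a separately trained aligner model rewrites that draft to satisfy the chosen alignment criterion. The sketch below illustrates the idea with Hugging Face pipelines; the model names (`gpt2`, `t5-small`) and the aligner prompt format are placeholder assumptions for illustration, not artifacts released with the paper.

```python
# Minimal sketch of decoupled alignment: a base LLM drafts a response,
# then a separate "aligner" model rewrites the draft for a criterion.
# Model names and the aligner prompt format are hypothetical stand-ins.
from transformers import pipeline

base_llm = pipeline("text-generation", model="gpt2")          # unaligned base model (placeholder)
aligner = pipeline("text2text-generation", model="t5-small")  # stand-in for a trained aligner

def aligned_generate(prompt: str) -> str:
    # 1) Draft a response with the (unaligned) base LLM.
    draft = base_llm(prompt, max_new_tokens=64)[0]["generated_text"]
    # 2) Have the aligner rewrite the draft for the criterion (here: ethics).
    aligner_input = (
        "Rewrite the response so it is ethical.\n"
        f"Input: {prompt}\nResponse: {draft}"
    )
    return aligner(aligner_input, max_new_tokens=64)[0]["generated_text"]

print(aligned_generate("How should I respond to an angry customer?"))
```

Because the aligner is trained independently of any particular base model, the same aligner can in principle be applied to any LLM's output on an as-needed basis, which is the decoupling the paper's title refers to.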