One-Shot and Few-Shot Exemplification Modeling

John Harvill, Hee Suk Yoon, Eunseop Yoon, Mark Hasegawa-Johnson, Chang Yoo


Abstract
Exemplification modeling is a task where the goal is to produce a viable example sentence that uses a target word with a target definition. The task is non-trivial for polysemous words, and previous work has only explored settings where ample labeled training data is available. In this paper, we demonstrate that exemplification modeling can be performed without a large labeled training corpus by either changing the format of the task (one-shot) or prompting large language models (few-shot), and we ablate key components of our proposed one-shot and few-shot systems. We provide extensive automatic and human evaluations of model performance and find that our proposed one-shot and few-shot approaches perform similarly to a fully supervised baseline. We compare and contrast each method in terms of labeled training dataset size, performance, and model size, and find that each technique involves at least one tradeoff that the others do not.
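
To make the few-shot setting mentioned in the abstract concrete, the sketch below assembles a prompt that pairs a small number of (word, definition, example) exemplars with a new query for a large language model to complete. This is a minimal illustrative sketch only, not the authors' actual prompt: the instruction wording, the exemplars, and the build_prompt helper are all hypothetical.

    # Hypothetical sketch of a few-shot prompt for exemplification modeling.
    # The exemplars below are illustrative and are not drawn from the paper.

    FEW_SHOT_EXEMPLARS = [
        ("bank", "the land alongside a river or lake",
         "We had a picnic on the grassy bank of the river."),
        ("bank", "a financial institution that accepts deposits",
         "She opened a savings account at the local bank."),
    ]

    def build_prompt(word: str, definition: str) -> str:
        """Assemble a few-shot prompt asking a language model to write an
        example sentence that uses `word` in the sense given by `definition`."""
        lines = ["Write an example sentence that uses the word in the given sense.", ""]
        for w, d, s in FEW_SHOT_EXEMPLARS:
            lines += [f"Word: {w}", f"Definition: {d}", f"Example: {s}", ""]
        # The trailing "Example:" cue leaves the completion to the model.
        lines += [f"Word: {word}", f"Definition: {definition}", "Example:"]
        return "\n".join(lines)

    print(build_prompt("bolt", "to run away suddenly"))

Because the exemplars pin down the input-output format, the model's completion after "Example:" can be taken directly as the generated example sentence.
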
Anthology ID: 2023.gem-1.7
Volume: Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month: December
Year: 2023
Address: Singapore
Editors: Sebastian Gehrmann, Alex Wang, João Sedoc, Elizabeth Clark, Kaustubh Dhole, Khyathi Raghavi Chandu, Enrico Santus, Hooman Sedghamiz
Venues: GEM | WS
Publisher: Association for Computational Linguistics
Pages: 76–87
URL: https://aclanthology.org/2023.gem-1.7
Cite (ACL): John Harvill, Hee Suk Yoon, Eunseop Yoon, Mark Hasegawa-Johnson, and Chang Yoo. 2023. One-Shot and Few-Shot Exemplification Modeling. In Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 76–87, Singapore. Association for Computational Linguistics.
Cite (Informal): One-Shot and Few-Shot Exemplification Modeling (Harvill et al., GEM-WS 2023)
PDF: https://aclanthology.org/2023.gem-1.7.pdf