Exploring Annotator Rationales for Active Learning with Transformers
12/14/2022 - For decades, natural language processing (NLP) has provided methods for computers to understand language in a way that mimics humans. Today's large language models are built on transformers, complex stacks of neural network layers, so their decision-making processes are usually incomprehensible to humans, and they require large amounts of data to be trained properly. In the past, researchers have tried to remedy this opacity by having models explain their decisions through rationales: short excerpts of the input that contributed most to the predicted label.
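As a minimal illustration of the idea (not from the original post), a rationale can be represented as a labeled example paired with the short span of input text that most supports the label; the class and field names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RationaleExample:
    """A labeled example with a rationale: the short excerpt of the
    input text that contributed most to the assigned label."""
    text: str       # full input text
    label: str      # assigned label
    rationale: str  # excerpt of `text` supporting the label

# Hypothetical sentiment example: the rationale is the phrase
# that most strongly signals the positive label.
example = RationaleExample(
    text="The film was slow at times, but the acting was brilliant.",
    label="positive",
    rationale="the acting was brilliant",
)
```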