Building Static Embeddings from Contextual Ones: Is It Useful for Building Distributional Thesauri?
Abstract
While contextual language models now dominate the field of Natural Language Processing, the representations they build at the token level are not suitable for all uses. In this article, we propose a new method for building word-level (type-level) embeddings from contextual models. This method combines the generalization and the aggregation of token representations. We evaluate it on a large set of English nouns from the perspective of building distributional thesauri for extracting semantic similarity relations. Moreover, we analyze the differences between static embeddings and type-level embeddings according to features such as word frequency or the type of semantic relations these embeddings account for, showing that the properties of the two kinds of embeddings can be complementary and exploited to further improve distributional thesauri.
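To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's exact method): type-level embeddings are obtained by aggregating (here, averaging) the contextual token vectors of each word's occurrences, and a distributional thesaurus entry is then a ranked list of nearest neighbours by cosine similarity. The toy random vectors stand in for real contextual model outputs (e.g. BERT-style token representations), and all function names are hypothetical.

```python
import numpy as np

def aggregate_token_embeddings(token_vecs_by_type):
    """Build a static (type-level) embedding for each word by averaging
    its contextual token vectors across occurrences in a corpus."""
    return {w: np.mean(np.stack(vecs), axis=0)
            for w, vecs in token_vecs_by_type.items()}

def nearest_neighbours(static_embs, word, k=3):
    """Rank the other words by cosine similarity to `word` -- the basic
    operation behind a distributional thesaurus entry."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    target = static_embs[word]
    scores = {w: cos(target, v)
              for w, v in static_embs.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy contextual vectors; in practice these would come from a
# contextual language model's token-level outputs.
rng = np.random.default_rng(0)
tokens = {
    "cat": [rng.normal(size=8) for _ in range(5)],
    "dog": [rng.normal(size=8) for _ in range(4)],
    "car": [rng.normal(size=8) for _ in range(3)],
}
static = aggregate_token_embeddings(tokens)
print(nearest_neighbours(static, "cat", k=2))
```

The aggregation step could be replaced by other pooling strategies (e.g. medoid selection or frequency-weighted averaging); simple averaging is used here only to keep the sketch short.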
Domains
Document and Text Processing