26.05.2021 @ 6pm CET: Interpretable Semantic Similarity Measurement
Currently, a wide range of solutions exists for determining the semantic similarity between pieces of text. Neural models such as BERT and ELMo have achieved the best results. However, their neural nature makes them difficult for a human operator to interpret: BERT-base, for example, stacks 12 transformer layers with 12 attention heads each, which effectively makes it a black-box model. In this talk, we address the design of solutions that, without sacrificing accuracy, pay special attention to the interpretability of the resulting model. To this end, we look at penalized regression, fuzzy logic, and genetic programming schemes. As a result, we have obtained models that achieve competitive results and that can be understood by people from end to end.
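To give a flavour of the interpretable approach, here is a minimal sketch: a similarity score built from transparent lexical features combined by a linear model. The features (`jaccard`, `length_ratio`) and the weights are illustrative assumptions, not the speaker's actual model; in the penalized-regression setting described in the talk, such weights would be learned with an L1 penalty, which drives uninformative features to exactly zero.

```python
# Hedged sketch of an interpretable similarity model: every feature and
# weight is human-readable, unlike the internal activations of BERT/ELMo.

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two sentences (Jaccard index)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def length_ratio(a: str, b: str) -> float:
    """Ratio of the shorter to the longer sentence, in tokens."""
    la, lb = len(a.split()), len(b.split())
    return min(la, lb) / max(la, lb) if max(la, lb) else 0.0

# Illustrative weights; a penalized regression would fit these from data.
MODEL = [(jaccard, 0.7), (length_ratio, 0.3)]

def similarity(a: str, b: str) -> float:
    """Weighted sum of interpretable features, in [0, 1]."""
    return sum(w * f(a, b) for f, w in MODEL)

score = similarity("the cat sat on the mat", "a cat sat on a mat")
```

Because the score is a plain weighted sum, a human can trace exactly why two sentences were judged similar, which is the property the black-box neural models lack.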
Dr. Jorge Martinez-Gil (Senior Research Scientist, Software Competence Center Hagenberg GmbH)