Contact person
Olof Mogren
Senior Researcher
At the RISE Learning Machines Seminar on 26 January 2023, Lena Voita, FAIR, gives her presentation: A Journey on Interpretability Methods in NLP: Inside-Out and Back. The seminar is in English.
In this talk, I will illustrate several approaches, motivations and ideas behind the analysis of NLP models. As an example, we will take the standard Transformer model trained for the Machine Translation task and will look at it from different points of view. We will start our journey by examining model components and seeing how, and whether, the model's inductive biases facilitate performing the task. Next, we turn to the kinds of information the model relies on to make its predictions. Here we will use attribution methods and will see how some potentially pathological behaviours look on the inside. Finally, we will turn our view inside-out, i.e. from the inner workings of the model to analysing its outputs. In this part, we will discover that the way the model learns to translate is very similar to how humans do it: from learning word-by-word translation first to becoming more fluent later. Most importantly, the two views (from the inside and from the outside) show the same process, and we will see how this process is reflected in these two types of analysis.
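The abstract mentions attribution methods for tracing which inputs a prediction relies on. As a purely illustrative sketch, and not the method or model used in the talk, the snippet below shows one common attribution technique, gradient-times-input saliency, on a hypothetical toy Transformer classifier in PyTorch; the model, token IDs and dimensions are all made up for the example.

```python
# Minimal sketch of gradient-times-input saliency on a toy model.
# Hypothetical setup for illustration only; not the talk's model.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, CLASSES = 100, 16, 2

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.enc = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.head = nn.Linear(DIM, CLASSES)

    def forward(self, embedded):
        # Takes embeddings directly so gradients can be attached to them.
        h = self.enc(embedded)
        return self.head(h.mean(dim=1))

model = ToyModel().eval()
tokens = torch.tensor([[5, 17, 42, 8]])                 # hypothetical token IDs
embedded = model.emb(tokens).detach().requires_grad_(True)

logits = model(embedded)
logits[0, logits.argmax()].backward()                   # gradient of the top-scoring class

# Saliency per token: |gradient * input|, summed over embedding dimensions.
saliency = (embedded.grad * embedded).abs().sum(dim=-1)
print(saliency)                                          # one attribution score per input token
```

The resulting scores give a rough per-token importance ranking; the talk discusses how such attributions can reveal, among other things, potentially pathological model behaviours.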
Elena (Lena) Voita is a Research Scientist at Facebook AI Research, which she joined recently. She is mostly interested in understanding what and how neural models learn. Previously, she was a PhD student at the University of Edinburgh supervised by Ivan Titov and Rico Sennrich, was awarded a Facebook PhD Fellowship, and worked as a Research Scientist at Yandex Research side by side with the Yandex Translate team. She enjoys writing blog posts and teaching; a public version of (a part of) her NLP course is available at NLP Course For You.