Rico Sennrich: Knowledge Transfer Across Languages and Modalities
At the RISE Learning Machines Seminar on April 20, we have the pleasure of hosting Rico Sennrich, University of Zurich, who will give his talk: Knowledge Transfer Across Languages and Modalities.
– In this talk, I will discuss recent successes (and failures) for the multimodal tasks of speech translation and sign language translation.
Abstract
Large multilingual models have revolutionized natural language processing by unlocking knowledge sharing across tasks and languages. For modalities other than text, such as audio, images, or video, neural architectures commonly used for text have also proven effective, but the beneficial sharing of representations across modalities remains a challenge.
In this talk, I will discuss recent successes (and failures) for the multimodal tasks of speech translation and sign language translation. Since both tasks are very low-resourced, which lessons from low-resource text translation can be applied to them? What unique solutions are required to address the audio and video modalities? To what extent is information shared across modalities in multi-task multimodal systems?
About the speaker
Rico Sennrich (PhD 2013, University of Zurich) is an SNSF Professor at the University of Zurich. He is an Honorary Fellow at the University of Edinburgh and an ELLIS Fellow.
He is an action editor for Computational Linguistics and ACL Rolling Review, and a standing reviewer for TACL. He regularly serves as an area chair at major conferences in the field, most recently as a senior area chair for ACL 2022 and EACL 2023.
His research focuses on machine learning for natural language processing, specifically high-quality machine translation, transfer learning, and multilingual and multimodal models.