Department of Computational Linguistics

What Do We Sacrifice Through Distilled Embedding Models?

Summary

The literature shows that cheap embedding models can be created via distillation from large embedding models with little loss of performance. How true is this for multilingual models in cross-lingual settings? In this thesis, we put this hypothesis to the test through a series of multilingual and cross-lingual evaluations.
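As a rough illustration of the kind of distillation the topic refers to, the sketch below trains a smaller multilingual student to reproduce the sentence embeddings of a larger teacher with a mean-squared-error objective. The model names, the toy batch, and the mean-pooling choice are assumptions for illustration only, not part of the thesis setup.

```python
# Minimal sketch of embedding-model distillation (illustrative, not the thesis setup):
# a small multilingual student learns to reproduce the sentence embeddings
# of a larger multilingual teacher via an MSE objective.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

teacher_name = "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"  # assumed teacher
student_name = "distilbert-base-multilingual-cased"                           # assumed student

tok_teacher = AutoTokenizer.from_pretrained(teacher_name)
tok_student = AutoTokenizer.from_pretrained(student_name)
teacher = AutoModel.from_pretrained(teacher_name).eval()
student = AutoModel.from_pretrained(student_name)

def embed(model, tokenizer, sentences):
    """Mean-pool the last hidden states into one embedding per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch).last_hidden_state          # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)    # (batch, seq_len, 1)
    return (out * mask).sum(1) / mask.sum(1)        # (batch, dim)

optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)
sentences = ["Das ist ein Beispiel.", "This is an example."]  # toy parallel batch

with torch.no_grad():
    target = embed(teacher, tok_teacher, sentences)  # frozen teacher embeddings
pred = embed(student, tok_student, sentences)        # student embeddings
loss = F.mse_loss(pred, target)                      # distillation objective
loss.backward()
optimizer.step()
```

How well such a student holds up under multilingual and cross-lingual evaluation is exactly the question the thesis investigates.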

If you are interested, please send an email addressed to all three of us for maximum visibility.

Requirements

  • Machine Learning
  • Python/PyTorch