Our paper SignCLIP: Connecting Text and Sign Language by Contrastive Learning was accepted and presented at EMNLP 2024! SignCLIP re-purposes CLIP (Contrastive Language-Image Pretraining) to project spoken language text and sign language videos, two classes of natural language in distinct modalities, into the same embedding space. The code and a demo notebook are available.
Author team: Zifan Jiang, Gerard Sant, Amit Moryossef, Mathias Müller, Rico Sennrich, Sarah Ebling
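The contrastive objective that CLIP-style models train with can be illustrated in a few lines. This is a minimal, generic sketch of the symmetric contrastive loss over paired text and video embeddings, not the actual SignCLIP implementation; the encoder outputs here are stand-in random vectors, and the temperature value is an illustrative assumption.

```python
import numpy as np

def normalize(x):
    """L2-normalize each row so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def contrastive_loss(text_emb, video_emb, temperature=0.07):
    """Symmetric CLIP-style loss: matching text/video pairs sit on the
    diagonal of the similarity matrix and are pushed above all mismatches."""
    t = normalize(text_emb)
    v = normalize(video_emb)
    logits = t @ v.T / temperature          # pairwise similarity logits
    n = logits.shape[0]
    labels = np.arange(n)                   # i-th text matches i-th video

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the text-to-video and video-to-text directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

# Stand-in embeddings in place of real text/video encoder outputs
rng = np.random.default_rng(0)
text_batch = rng.normal(size=(4, 16))
video_batch = rng.normal(size=(4, 16))
loss = contrastive_loss(text_batch, video_batch)
```

Training minimizes this loss, which pulls each text embedding toward its paired video embedding and away from the other videos in the batch; perfectly aligned pairs drive the loss toward zero.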