Institut für Computerlinguistik

Text Technology/Digital Linguistics colloquium HS 2024-25

Time & Location: every 2-3 weeks on Tuesdays from 10:15 am to 12:00 pm in room BIN-2-A.10.
Please note that the room has changed from the previous semester.

Online participation via the MS Teams Team CL Colloquium is also possible.

Responsible: Sina Ahmadi

Colloquium Schedule

17.09.2024   Dr. Gail Weiss (EPFL)
01.10.2024   Dr. Yingqiang Gao | Dr. Sina Ahmadi
15.10.2024   Cui Ding | Dr. Jannis Vamvas
29.10.2024   Patrick Haller | Dr. Reto Gubelmann
12.11.2024   Anastassia Shaitarova | Sant Muniesa
26.11.2024   Jan Brasser | Lucas Möller (Universität Stuttgart)
10.12.2024   Prof. Dr. Sarah Ebling & IICT team | Michelle Wastl


17 Sept 2024

Gail Weiss: Thinking Like Transformers - A Practical Session

With the help of the RASP programming language, we can better imagine how transformers, the powerful attention-based sequence-processing architecture, solve certain tasks. Some tasks, such as simply repeating or reversing an input sequence, have reasonably straightforward solutions, but many others are more difficult. To unlock a fuller intuition of what can and cannot be achieved with transformers, we must understand not just the RASP operations but also how to use them effectively. In this session, I would like to discuss some useful tricks with you in more detail. How is the powerful selector_width operation derived from the primitive RASP operations? How can a fixed-depth RASP program perform arbitrary-length long-addition, despite the equally large number of potential carry operations such a computation entails? How might a transformer perform in-context reasoning? And are any of these solutions reasonable, i.e., realisable in practice? I will begin with a brief introduction of the base RASP operations to ground our discussion, and then walk us through several interesting task solutions. Following this, and armed with this deeper intuition of how transformers solve several tasks, we will conclude with a discussion of what this implies for how knowledge and computation must spread across transformer layers and embeddings in practice.
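
As a toy illustration of the selector_width trick mentioned in the abstract (my own sketch in plain Python, not the speaker's material or the actual RASP implementation), the idea is that a phantom, always-selected BOS position turns aggregate's mean into a count: if a query selects w positions, the mean of a BOS-only indicator is 1/(w+1), which can be inverted to recover w.

```python
# Toy stand-ins for RASP's select and aggregate over Python lists.
def select(keys, queries, pred):
    # selector[q][k] is True when pred(keys[k], queries[q]) holds
    return [[pred(k, q) for k in keys] for q in queries]

def aggregate(selector, values, default=0.0):
    # mean of the selected values at each query position
    out = []
    for row in selector:
        picked = [v for s, v in zip(row, values) if s]
        out.append(sum(picked) / len(picked) if picked else default)
    return out

def selector_width(selector):
    # BOS trick: prepend a phantom always-selected position, aggregate
    # an indicator that is 1 only at BOS, and invert the resulting
    # mean: mean = 1 / (width + 1), so width = 1/mean - 1.
    widths = []
    for row in selector:
        bos_row = [True] + row
        indicator = [1.0] + [0.0] * len(row)
        mean = aggregate([bos_row], indicator)[0]
        widths.append(round(1.0 / mean) - 1)
    return widths
```

For example, selecting all matching tokens in "aabca" and taking the selector width yields the per-token frequency [3, 3, 1, 1, 3].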

1 Oct 2024

Dr. Yingqiang Gao: Mining Arguments in Scientific Documents

In today's talk, we show the critical role of well-structured scientific texts in argument writing, which enhances clarity, reduces misinformation, and promotes knowledge dissemination. We identify the challenges researchers face in maintaining coherence and factual accuracy during the writing process, highlighting the need for automation through AI-driven tools that integrate text retrieval and generation. Despite advances in Natural Language Processing and Large Language Models, effective scientific writing assistants still face hurdles, particularly in automatic text alignment and the reliability of generated content. To address these issues, we investigate empirical unsupervised methods for retrieving, aligning, and generating arguments in scientific documents, culminating in the development of a web application that applies these argument mining techniques.

Dr. Sina Ahmadi: Tracking Borrowed Words: A Multilingual Contrastive Dataset for Loanword Evaluation

Lexical borrowing, the adoption of words from one language into another, is a ubiquitous linguistic phenomenon influenced by geopolitical, societal, and technological factors. This talk explores lexical borrowing from a computational linguistics perspective. I present our effort to create a novel contrastive dataset comprising sentences with and without loanwords, designed to evaluate the impact of borrowings. Using this dataset, the performance of state-of-the-art machine translation and pretrained language models is assessed, quantifying their behavior and robustness in the presence and absence of loanwords. Our findings provide valuable insights into the challenges lexical borrowing poses for computational models and offer extensive analysis in multilingual contexts.

15 Oct 2024

Cui Ding: Measurement reliability of individual differences in sentence processing

Psycholinguistic theories traditionally assume similar cognitive mechanisms across speakers. However, researchers have recently begun to recognize the need to account for individual differences when explaining human cognition. To address this issue, a growing body of work investigates how individual differences interact with human sentence processing. Implicitly, these studies assume that individual effects are replicable across experimental sessions and that methods of assessment (e.g., eye-tracking vs. self-paced reading) are interchangeable. However, as noted in the reliability paradox (Hedge et al., 2018), this assumption is unwarranted. A crucial first step toward a principled investigation of individual differences in sentence processing is establishing their measurement reliability, that is, the correlation of individual-level effects across multiple experimental sessions and methodological contexts. In this talk, I present the first German naturalistic reading corpus with four experimental sessions per participant (two eye-tracking and two self-paced reading sessions), including a comprehensive assessment of participants' cognitive capacities and reading skills. I deploy a two-task Bayesian hierarchical model to assess the measurement reliability of individual differences across a range of effects in response to predictors of sentence processing difficulty that are well established at the population level.

Dr. Jannis Vamvas: Towards Vector Representations of Textual Difference

I introduce a new research project called «InvestigaDiff», which aims to enable synchronization of documents across different languages. Inspired by how programmers use diff tools to highlight changes in code, we are exploring whether similar concepts can be applied to natural language texts, even when they are in different languages. One research direction involves representation learning at the token level. I will present an idea for an approach that uses soft prompts to guide an LLM in rewriting one text into the other, with these soft prompts serving as the vector representations of textual difference.

29 Oct 2024

Patrick Haller: TBA

TBA

Dr. Reto Gubelmann: TBA

TBA

12 Nov 2024

Anastassia Shaitarova: TBA

TBA

Sant Muniesa: TBA

TBA

26 Nov 2024

Jan Brasser: TBA

TBA

Lucas Möller: TBA

TBA

10 Dec 2024

Prof. Dr. Sarah Ebling & Team: TBA

TBA

Michelle Wastl: TBA

TBA