Presentation
I am a final-year Ph.D. candidate in Linguistics and Computer Science at Aix-Marseille Université, affiliated with both the Laboratoire Parole et Langage (UMR 7309) and the Laboratoire d'Informatique et Systèmes (UMR 7020), and supervised by Philippe Blache, Magalie Ochs, Roxane Bertrand, and Stéphane Rauzy. I also spent four months at Carnegie Mellon University under the supervision of Professor Louis-Philippe Morency.
My research centers on understanding the mechanisms of communication and mutual comprehension, exploring both behavioral and cognitive dimensions to identify factors that enhance the quality of conversations. Specifically, my thesis work focuses on conversational feedback.
My expertise includes:
* Creating multimodal corpora that combine audio, video, and neurophysiological data with annotations across multiple modalities. Notably, I created the SMYLE corpus.
* Developing predictive feedback models from interpretable multimodal features through computational modeling and statistical analysis. I am currently characterizing listening styles through methods such as clustering on these interpretable features.
* Designing and conducting online experiments to evaluate how conversational feedback is perceived. I also study the impact of distractions on listener behavior, drawing on the experimental conditions of the SMYLE corpus.
I would be delighted to connect with anyone interested in discussing my research or exploring collaboration opportunities. Please don't hesitate to reach out at auriane.boudin@univ-amu.fr.
Publications
* A multimodal model for predicting feedback position and type during conversation. Speech Communication, 2024, 159, pp. 103066. ⟨10.1016/j.specom.2024.103066⟩. Journal article, hal-04551398v1.
* How is your feedback perceived? An experimental study of anticipated and delayed conversational feedback. JASA Express Letters, 2024, 4 (7). ⟨10.1121/10.0026448⟩. Journal article, hal-04687738v1.
* A Forum Theatre Corpus for Discrimination Awareness. Frontiers in Computer Science, 2023, 5. ⟨10.3389/fcomp.2023.1081586⟩. Journal article, hal-03922947v1.
* A multimodal approach for modeling engagement in conversation. Frontiers in Computer Science, 2023, 5. ⟨10.3389/fcomp.2023.1062342⟩. Journal article, hal-04011927v1.
* Principes et outils pour l'annotation des corpus [Principles and tools for corpus annotation]. Travaux Interdisciplinaires sur la Parole et le Langage, 2022, Panorama des recherches au Laboratoire Parole et Langage, 38. ⟨10.4000/tipa.5424⟩. Journal article, hal-03917814v1.
* The Distracted Ear: How Listeners Shape Conversational Dynamics. LREC-COLING 2024, May 2024, Torino, Italy. Conference paper, hal-04569106v1.
* SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal. International Conference on Multimodal Interaction (ICMI '23 Companion), Oct 2023, Paris, France. ⟨10.1145/3610661.3616188⟩. Conference paper, hal-04195031v1.
* Interdisciplinary Corpus-based Approach for Exploring Multimodal Conversational Feedback. ICMI '22: International Conference on Multimodal Interaction, Nov 2022, Bengaluru, India, pp. 705-710. ⟨10.1145/3536221.3557029⟩. Conference paper, hal-04688897v1.
* Are you Smiling When I am Speaking? Proceedings of the Smiling and Laughter across Contexts and the Life-span Workshop @LREC2022, Jun 2022, Marseille, France. Conference paper, hal-03713867v1.
* A Multimodal Model for Predicting Conversational Feedbacks. International Conference on Text, Speech, and Dialogue (TSD), 2021, Olomouc, Czech Republic. ⟨10.1007/978-3-030-83527-9_46⟩. Conference paper, hal-03331446v2.