Existing captioning and gaze prediction approaches do not consider the multiple facets of personality that affect how a viewer extracts meaning from an image. While there are methods that consider personalized captioning, they do not consider personalized perception across modalities, i.e. how a person's way of looking at an image (gaze) affects the way they describe it (captioning). In this work, we propose a model for cross-modality personalized retrieval. In addition to modeling gaze and captions, we also explicitly model the personality of the users providing these samples. We incorporate constraints that encourage gaze and caption samples on the same image to be close in a learned space; we refer to this as content modeling. We also model style: we encourage samples provided by the same user to be close in a separate embedding space, regardless of the image on which they were provided. To leverage the complementary information that content and style constraints provide, we combine the embeddings from both networks. We show that our combined embeddings achieve better performance than existing approaches for cross-modal retrieval.
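The content and style constraints described above can be illustrated with a simple contrastive formulation. The abstract does not specify the exact loss, so the margin-based contrastive loss, the embedding dimensions, and the concatenation used to combine the two spaces below are all assumptions; this is a minimal NumPy sketch, not the authors' implementation.

```python
import numpy as np

def pairwise_sq_dists(a, b):
    # Squared Euclidean distance between every row of a and every row of b.
    return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """Pull matching pairs together, push non-matching pairs apart.

    emb_a, emb_b: (n, d) embeddings, e.g. gaze features vs. caption features.
    same: (n, n) boolean mask. For the content constraint, True where the
          pair was provided on the same image; for the style constraint,
          True where the pair was provided by the same user.
    (Hypothetical formulation -- the paper's actual loss is not given.)
    """
    d = pairwise_sq_dists(emb_a, emb_b)
    pos = np.where(same, d, 0.0)                                   # attract matches
    neg = np.where(~same, np.maximum(0.0, margin - np.sqrt(d)) ** 2, 0.0)  # repel others
    return (pos + neg).mean()

def combine(content_emb, style_emb):
    # One simple way to fuse the complementary cues: concatenation.
    return np.concatenate([content_emb, style_emb], axis=1)
```

With identical, well-separated embeddings and a matching mask on the diagonal, the loss is zero; mismatched pairs inside the margin or matched pairs far apart both incur a penalty.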
Nils is a research scientist with expertise in computer vision, machine learning, and natural language processing. Specifically, his research is focused on multi-modal learning, deep learning, metric learning, and reinforcement learning. Nils' research has been published in conferences such as CVPR, AAAI, BMVC, and WACV, and in workshops at NeurIPS and ICML. Before joining Snap Inc., he worked at ASEA Brown Boveri (ABB) and Educational Testing Service (ETS). Nils earned his PhD in Computer Science from the University of Pittsburgh, his master's from the University of Sao Paulo, and his bachelor's from the National University of Trujillo.