Results 21 to 30 of about 245,725 (289)
Analyzing image-text relations for semantic media adaptation and personalization [PDF]
Progress in semantic media adaptation and personalisation requires that we know more about how different media types, such as texts and images, work together in multimedia communication.
Hughes, Mark +3 more
core +1 more source
A Flexible pragmatics-driven language generator for animated agents [PDF]
This paper describes the NECA MNLG; a fully implemented Multimodal Natural Language Generation module. The MNLG is deployed as part of the NECA system which generates dialogues between animated agents.
Piwek, Paul
core +7 more sources
In this article, the author analyzes interjections as signals of emotion in order to determine their functions in creating the perlocutionary effect of comics, which is especially relevant in the era of ever-increasing emotionalization of the global ...
N. V. Panina
doaj +1 more source
Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation [PDF]
In Multimodal Neural Machine Translation (MNMT), a neural model generates a translated sentence that describes an image, given the image itself and one source description in English.
Delbrouck, Jean-Benoit +2 more
core +2 more sources
Improving Context Modelling in Multimodal Dialogue Generation
In this work, we investigate the task of textual response generation in a multimodal task-oriented dialogue system. Our work is based on the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017) in the fashion domain.
Agarwal, Shubham +3 more
core +1 more source
When Memories Make a Difference: Multimodal Literacy Narratives for Preservice ELA Methods Students [PDF]
This article examines multimodal literacy narrative projects designed by students in a methods of teaching course for secondary preservice English Language Arts teachers.
Hope, Kate
core +1 more source
The role of avatars in e-government interfaces [PDF]
This paper investigates the use of avatars to communicate live messages in e-government interfaces. A comparative study is presented that evaluates the contribution of multimodal metaphors (including avatars) to the usability of interfaces for e ...
C.F. Camerer +5 more
core +2 more sources
Multimodal Fusion with Dual-Attention Based on Textual Double-Embedding Networks for Rumor Detection
Rumors can have a negative impact on social life, and compared with purely textual rumors, online rumors that combine multiple modalities are more likely to mislead users and spread, so multimodal rumor detection cannot be ignored.
Huawei Han +4 more
doaj +1 more source
Learning Multimodal Word Representation via Dynamic Fusion Methods
Multimodal models have been proven to outperform text-based models on learning semantic word representations. Almost all previous multimodal models typically treat the representations from different modalities equally.
Wang, Shaonan +2 more
core +1 more source
Using Sound to Enhance Users’ Experiences of Mobile Applications [PDF]
The latest smartphones with GPS, electronic compass, directional audio, touch screens etc. hold potentials for location based services that are easier to use compared to traditional tools.
Liljedahl, Mats, Papworth, Nigel
core +4 more sources