Results 51 to 60 of about 24,451
Can Audio Captions Be Evaluated With Image Caption Metrics?
ICASSP ...
Zhou, Zelin +5 more
openaire +2 more sources
Single‐molecule DNA flow‐stretch assays for high‐throughput DNA–protein interaction studies
We describe an optimised single‐molecule DNA flow‐stretch assay that visualises DNA–protein interactions in real time. Linear DNA fragments are tethered to a surface and stretched by buffer flow for fluorescence imaging. Using λ and φX174 DNA, this protocol enhances reproducibility and accessibility, providing a versatile approach for studying diverse ...
Ayush Kumar Ganguli +8 more
wiley +1 more source
Image Captioning with Semantic Attention
Automatically generating a natural language description of an image has attracted interest recently, both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural ...
Fang, Chen +4 more
core +1 more source
Explainability for Medical Image Captioning
Abstract Medical image captioning is the process of generating clinically significant descriptions for medical images, with many applications, the most frequent being medical report generation. In general, automatic captioning of medical images is of great interest to medical experts since it offers assistance in diagnosis, disease ...
Oussalah Mourad +2 more
openaire +2 more sources
Remote Monitoring in Myasthenia Gravis: Exploring Symptom Variability
ABSTRACT Background Myasthenia gravis (MG) is a rare autoimmune disorder characterized by fluctuating muscle weakness and potentially life‐threatening crises. While continuous specialized care is essential, access barriers often delay timely interventions. To address this, we developed MyaLink, a telemedical platform for MG patients.
Maike Stein +13 more
wiley +1 more source
Image Captioning Method Based on Transformer Visual Features Fusion [PDF]
Existing image captioning methods use only regional visual features to generate description statements and ignore the importance of grid visual features. Moreover, because these methods are two-stage approaches, image captioning quality suffers.
Xuebing BAI, Jin CHE, Jinman WU, Yumin CHEN
doaj +1 more source
Enhanced Image Captioning with Color Recognition Using Deep Learning Methods
Automatically describing the content of an image is an interesting and challenging task in artificial intelligence. In this paper, an enhanced image captioning model—including object detection, color analysis, and image captioning—is proposed to ...
Yeong-Hwa Chang +3 more
doaj +1 more source
Visual language grounding is widely studied in modern neural image captioning systems, which typically adopts an encoder-decoder framework consisting of two principal components: a convolutional neural network (CNN) for image feature extraction and a ...
Chen, Hongge +4 more
core +1 more source
ImageCaptioner2: Image Captioner for Image Captioning Bias Amplification Assessment
Most pre-trained learning systems are known to suffer from bias, which typically emerges from the data, the model, or both. Measuring and quantifying bias and its sources is a challenging task and has been extensively studied in image captioning. Despite the significant effort in this direction, we observed that existing metrics lack consistency in the ...
Bakr, Eslam Mohamed +3 more
openaire +2 more sources
Automatic image captioning [PDF]
We examine the problem of automatic image captioning. Given a training set of captioned images, we want to discover correlations between image features and keywords, so that we can automatically find good keywords for a new image. We experiment thoroughly with multiple design alternatives on large datasets of various content styles, and our proposed ...
Pan J.-Y. +3 more
openaire +2 more sources