MultiMAE: Multi-modal Multi-task Masked Autoencoders [PDF]
We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders (MultiMAE). It differs from standard Masked Autoencoding in two key aspects: (i) it can optionally accept additional modalities of information in the input besides the ...
Roman Bachmann +3 more
semanticscholar +1 more source
Cross-Modal Contrastive Learning for Text-to-Image Generation [PDF]
The output of text-to-image synthesis systems should be coherent, clear, photo-realistic scenes with high semantic fidelity to their conditioned text descriptions.
Han Zhang +4 more
semanticscholar +1 more source
Aircraft performance models play a key role in airline operations, especially in planning a fuel-efficient flight. In practice, manufacturers provide guidelines which are slightly modified throughout the aircraft life cycle via the tuning of a single ...
Florent Dewez +2 more
doaj +1 more source
Structural Refinement for the Modal nu-Calculus [PDF]
We introduce a new notion of structural refinement, a sound abstraction of logical implication, for the modal nu-calculus. Using new translations between the modal nu-calculus and disjunctive modal transition systems, we show that these two specification
A.N. Prior +20 more
core +4 more sources
The expressive power of modal logic with inclusion atoms [PDF]
Modal inclusion logic is the extension of basic modal logic with inclusion atoms, and its semantics is defined on Kripke models with teams. A team of a Kripke model is just a subset of its domain. In this paper we give a complete characterisation for the
Hella, Lauri, Stumpf, Johanna
core +2 more sources
Thermal Resistance of Gray Modal and Micromodal Socks
Men's socks were produced on a Lonati circular knitting machine in 18 different combinations in multi-plated plain jersey from basic modal and basic micro modal yarn with the addition of cotton or PA multifilament yarn and elastane yarn in the sock cuff.
Skenderi Zenun +3 more
doaj +1 more source
LayoutLMv2: Multi-modal Pre-training for Visually-rich Document Understanding [PDF]
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents.
Yang Xu +11 more
semanticscholar +1 more source
Modal Verb “Shall” in Contemporary American English: A Corpus-Based Study
This paper explored the modal verb "shall" in formal and informal writing in the academic and fiction registers. It focused on the frequencies of "shall" across academic and fiction domains in contemporary American English and the differences in the usage of ...
Maria Caroline Samodra, Barli Bram
doaj +1 more source
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training [PDF]
We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner. Borrowing ideas from cross-lingual pre-trained models, such as XLM (Lample and Conneau 2019) and Unicoder (Huang et al ...
Gen Li +5 more
openalex +3 more sources
Just as Boolean rules define Boolean categories, the Boolean operators define higher-order Boolean categories referred to as modal categories. We examine the similarity order between these categories and the standard category of logical identity (i.e ...
Vigo , Dr. Ronaldo
core +1 more source