Results 1 to 10 of about 5,388,665
No Language Left Behind: Scaling Human-Centered Machine Translation [PDF]
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today.
NLLB Team +38 more
semanticscholar +1 more source
when we were astronauts in training / we spun around at a breakneck speed / in a shining sphere in the dark / until our eyes ended up / on the other side of everything when we were ...
Ivica Prtenjača +1 more
doaj +1 more source
A model for implementing international networking within the pandemic conditions [PDF]
Introducing a networked form of interaction among participants in the educational process, which offers a number of advantages in delivering educational programs, received an enthusiastic response from both universities and students.
Kotsyubinskaya Liubov Vyacheslavovna +2 more
doaj +1 more source
Zero-shot Image-to-Image Translation [PDF]
Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse, high-quality images. However, directly applying these models for real image editing remains challenging for two reasons. First, it is hard for users to ...
Gaurav Parmar +5 more
semanticscholar +1 more source
Large Language Models Are State-of-the-Art Evaluators of Translation Quality [PDF]
We describe GEMBA, a GPT-based metric for assessment of translation quality, which works both with a reference translation and without. In our evaluation, we focus on zero-shot prompting, comparing four prompt variants in two modes, based on the ...
Tom Kocmi, C. Federmann
semanticscholar +1 more source
Implicit Cross-Lingual Word Embedding Alignment for Reference-Free Machine Translation Evaluation
Cross-lingual word embedding alignment is critically important for reference-free machine translation evaluation, where source texts are directly compared with system translations.
Min Zhang +7 more
doaj +1 more source
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension [PDF]
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
M. Lewis +7 more
semanticscholar +1 more source
Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation [PDF]
Large-scale text-to-image generative models have been a revolutionary breakthrough in the evolution of generative AI, synthesizing diverse images with highly complex visual concepts.
Narek Tumanyan +3 more
semanticscholar +1 more source
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks [PDF]
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be ...
Jun-Yan Zhu +3 more
semanticscholar +1 more source
Multilingual Denoising Pre-training for Neural Machine Translation [PDF]
This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks.
Yinhan Liu +7 more
semanticscholar +1 more source

