Results 301 to 310 of about 4,624,323 (348)
Some of the following articles may not be open access.
Understanding Cross-Lingual Alignment - A Survey
Annual Meeting of the Association for Computational Linguistics
Cross-lingual alignment, the meaningful similarity of representations across languages in multilingual language models, has been an active field of research in recent years.
Katharina Hämmerl +2 more
semanticscholar +1 more source
AdaMergeX: Cross-Lingual Transfer with Large Language Models via Adaptive Adapter Merging
North American Chapter of the Association for Computational Linguistics
As an effective alternative to direct fine-tuning on target tasks in specific languages, cross-lingual transfer addresses the challenges of limited training data by decoupling "task ability" and "language ability" by fine-tuning on the target ...
Yiran Zhao +4 more
semanticscholar +1 more source
LLMs Beyond English: Scaling the Multilingual Capability of LLMs with Cross-Lingual Feedback
Annual Meeting of the Association for Computational Linguistics
To democratize large language models (LLMs) to most natural languages, it is imperative to make these models capable of understanding and generating texts in many languages, in particular low-resource ones.
Wen Lai, Mohsen Mesgar, Alexander Fraser
semanticscholar +1 more source
Cross-Lingual Entity Alignment via Joint Attribute-Preserving Embedding
International Workshop on the Semantic Web, 2017
Entity alignment is the task of finding entities in two knowledge bases (KBs) that represent the same real-world object. When facing KBs in different natural languages, conventional cross-lingual entity alignment methods rely on machine translation to ...
Zequn Sun, Wei Hu, Chengkai Li
semanticscholar +1 more source
Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models
Conference on Empirical Methods in Natural Language Processing
Despite their popularity in non-English NLP, multilingual language models often underperform monolingual ones due to inter-language competition for model parameters. We propose Cross-lingual Expert Language Models (X-ELM), which mitigate this competition.
Terra Blevins +6 more
semanticscholar +1 more source
Probing the Emergence of Cross-lingual Alignment during LLM Training
Annual Meeting of the Association for Computational Linguistics
Multilingual Large Language Models (LLMs) achieve remarkable levels of zero-shot cross-lingual transfer performance. We speculate that this is predicated on their ability to align languages without explicit supervision from parallel sentences.
Hetong Wang +2 more
semanticscholar +1 more source
Annual Meeting of the Association for Computational Linguistics
Text watermarking technology aims to tag and identify content produced by large language models (LLMs) to prevent misuse. In this study, we introduce the concept of cross-lingual consistency in text watermarking, which assesses the ability of text ...
Zhiwei He +7 more
semanticscholar +1 more source
Cross-lingual question answering
2012
Over the past ten years, question answering has developed into an intensively researched field. It represents the next step beyond information retrieval, with the aim of providing more precise access to the available information in large data collections.
openaire +1 more source
North American Chapter of the Association for Computational Linguistics
Despite their strong ability to retrieve knowledge in English, current large language models show imbalanced abilities across different languages. Two approaches are proposed to address this, i.e., multilingual pretraining and multilingual instruction tuning.
Changjiang Gao +5 more
semanticscholar +1 more source
Cross-lingual font style transfer with full-domain convolutional attention
Pattern Recognition
In this paper, we propose a new cross-lingual font style transfer model, FCAGAN, which enables font style transfer between different languages by observing a small number of samples.
Hui Zhao +5 more
semanticscholar +1 more source