Results 221 to 230 of about 358,333

Large Language Models With Contrastive Decoding Algorithm for Hallucination Mitigation in Low‐Resource Languages

open access: yesCAAI Transactions on Intelligence Technology, EarlyView.
ABSTRACT Neural machine translation (NMT) has advanced with deep learning and large‐scale multilingual models, yet translation for low‐resource languages often lacks sufficient training data and is prone to hallucinations. This often results in translated content that diverges significantly from the source text.
Zan Hongying   +4 more
wiley   +1 more source
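
As a point of reference for the contrastive decoding approach named in the entry above, the following is a minimal, generic Python sketch of contrastive decoding, not the specific method from the paper: the toy log-probability dictionaries, the alpha plausibility threshold, and the contrastive_scores helper are illustrative assumptions. In practice the two distributions would come from a stronger "expert" model and a weaker (or more hallucination-prone) "amateur" model.

import math

def contrastive_scores(expert_logprobs, amateur_logprobs, alpha=0.1):
    """Score candidate next tokens by expert minus amateur log-probability.

    Tokens whose expert probability falls below alpha times the expert's top
    probability are dropped (a plausibility constraint), so the contrast
    cannot promote tokens the expert itself finds unlikely.
    """
    threshold = max(expert_logprobs.values()) + math.log(alpha)
    scores = {}
    for token, expert_lp in expert_logprobs.items():
        if expert_lp < threshold:
            continue  # fails the plausibility constraint
        # Reward tokens the expert likes much more than the amateur does.
        scores[token] = expert_lp - amateur_logprobs.get(token, math.log(1e-9))
    return scores

# Toy example: both models rate "the" highly, but only the amateur favours
# the implausible continuation "banana".
expert = {"the": math.log(0.6), "river": math.log(0.3), "banana": math.log(0.1)}
amateur = {"the": math.log(0.5), "river": math.log(0.1), "banana": math.log(0.4)}
scores = contrastive_scores(expert, amateur)
print(max(scores, key=scores.get))  # -> "river"

In a hallucination-mitigation setting, the "amateur" distribution might instead come from the same model conditioned on a corrupted or source-free context, so that the contrast penalises continuations not grounded in the source sentence.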

Unified Neural Lexical Analysis Via Two‐Stage Span Tagging

open access: yesCAAI Transactions on Intelligence Technology, EarlyView.
ABSTRACT Lexical analysis is a fundamental task in natural language processing, which involves several subtasks, such as word segmentation (WS), part‐of‐speech (POS) tagging, and named entity recognition (NER). Recent works have shown that taking advantage of relatedness between these subtasks can be beneficial.
Yantuan Xian   +5 more
wiley   +1 more source

Transforming language research from classic desktops to virtual environments. [PDF]

open access: yesSci Rep
Rocabado F   +4 more
europepmc   +1 more source

Tibetan Medical Named Entity Recognition Based on Syllable‐Word‐Sentence Embedding Transformer

open access: yesCAAI Transactions on Intelligence Technology, EarlyView.
ABSTRACT Tibetan medical named entity recognition (Tibetan MNER) involves extracting specific types of medical entities from unstructured Tibetan medical texts. Tibetan MNER provides important data support for work related to Tibetan medicine. However, existing Tibetan MNER methods often struggle to comprehensively capture multi‐level semantic ...
Jin Zhang   +9 more
wiley   +1 more source

Tibetan Few‐Shot Learning Model With Deep Contextualised Two‐Level Word Embeddings

open access: yesCAAI Transactions on Intelligence Technology, EarlyView.
ABSTRACT Few‐shot learning is the task of identifying new text categories from a limited set of training examples. The two key challenges in few‐shot learning are insufficient understanding of new samples and imperfect modelling. The uniqueness of low‐resource languages lies in their limited linguistic resources, which directly leads to the difficulty ...
Ziyue Zhang   +11 more
wiley   +1 more source
