Results 61 to 70 of about 3,185,640

A Survey of the State of Explainable AI for Natural Language Processing [PDF]

open access: yes | AACL, 2020
Recent years have seen important advances in the quality of state-of-the-art models, but this has come at the expense of models becoming less interpretable.
Marina Danilevsky   +5 more
semanticscholar   +1 more source

Sublemma-Based Neural Machine Translation

open access: yes | Complexity, 2021
A powerful deep learning approach frees us from feature engineering in many artificial intelligence tasks. The approach is able to extract efficient representations from the input data if the data are large enough. Unfortunately, it is not always possible ...
Thien Nguyen, Huu Nguyen, Phuoc Tran
doaj   +1 more source

Brains and algorithms partially converge in natural language processing

open access: yes | Communications Biology, 2022
Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However, what drives this similarity remains unknown.
C. Caucheteux, J. King
semanticscholar   +1 more source
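
The entry above concerns models trained on the masked-word objective. As a purely illustrative sketch, masked-word prediction with a pre-trained model takes only a few lines; the Hugging Face transformers library and the bert-base-uncased checkpoint used here are assumptions for the example, not the models analysed in the paper.

```python
# Minimal masked-word prediction sketch (illustrative choices, not the paper's models).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model predicts the word hidden behind [MASK] from its context,
# the training objective referred to in the abstract above.
for prediction in fill_mask("The doctor prescribed a new [MASK] for the patient."):
    print(f"{prediction['token_str']:>12}  p={prediction['score']:.3f}")
```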

Bootstrapping Knowledge Graphs From Images and Text

open access: yes | Frontiers in Neurorobotics, 2019
The problem of generating structured Knowledge Graphs (KGs) is difficult and open but relevant to a range of tasks related to decision making and information augmentation.
Jiayuan Mao   +7 more
doaj   +1 more source

Natural‐language processing applied to an ITS interface [PDF]

open access: yes, 1994
The aim of this paper is to show that, with a subset of a natural language, simple systems running on PCs can be developed that nevertheless serve as an effective interfacing tool when building an Intelligent Tutoring System (ITS).
Enrico Fischetti, Antonio Gisolfi
core   +2 more sources
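
The approach above relies on restricting input to a small subset of natural language so that the interface stays simple enough for modest hardware. A hypothetical sketch of such a restricted interface is a handful of patterns mapped onto tutoring commands; every pattern and command name below is invented for illustration and does not come from the paper.

```python
import re

# Hypothetical restricted natural-language interface: only a small set of
# phrasings is recognised, so the parser stays trivially cheap to run.
PATTERNS = [
    (re.compile(r"(?:show|give) me a hint", re.I), "HINT"),
    (re.compile(r"explain (?:the )?(?P<topic>\w+)", re.I), "EXPLAIN"),
    (re.compile(r"(?:quit|exit|stop)", re.I), "QUIT"),
]

def parse(utterance: str):
    """Map a student utterance onto a tutoring-system command, or None."""
    for pattern, command in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return command, match.groupdict()
    return None

print(parse("Could you explain recursion, please?"))  # ('EXPLAIN', {'topic': 'recursion'})
```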

Unsupervised Chunking Based on Graph Propagation from Bilingual Corpus

open access: yes | The Scientific World Journal, 2014
This paper presents a novel approach to unsupervised shallow parsing: a model trained on the unannotated Chinese text of a parallel Chinese-English corpus. In this approach, no annotation on the Chinese side is used. The exploitation of graph-based label propagation ...
Ling Zhu, Derek F. Wong, Lidia S. Chao
doaj   +1 more source
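
The abstract above mentions graph-based label propagation. The snippet below is a generic label-propagation sketch on a toy four-node similarity graph, meant only to illustrate how seed labels spread to unlabeled nodes; it does not reproduce the paper's bilingual projection or its chunking model.

```python
import numpy as np

# Generic label propagation on a toy graph (illustration only, not the paper's model).
# Nodes 0-1 carry seed chunk labels (e.g. projected from the English side);
# nodes 2-3 are unlabeled and inherit label distributions from their neighbours.
W = np.array([            # symmetric edge weights of the similarity graph
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

labels = np.array([       # rows: nodes, columns: label distribution (B, I)
    [1.0, 0.0],
    [0.0, 1.0],
    [0.5, 0.5],
    [0.5, 0.5],
])
seeds = [0, 1]
seed_labels = labels[seeds].copy()

for _ in range(20):
    labels = W @ labels                              # absorb neighbours' labels
    labels /= labels.sum(axis=1, keepdims=True)      # renormalise to distributions
    labels[seeds] = seed_labels                      # clamp the known seed labels

print(labels.round(2))    # unlabeled nodes converge toward their neighbours' labels
```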

Natural language processing in-and-for design research

open access: yes | Design Science, 2022
We review the scholarly contributions that utilise natural language processing (NLP) techniques to support the design process. Using a heuristic approach, we gathered 223 articles published in 32 journals between 1991 and the present.
L. Siddharth   +2 more
doaj   +1 more source

Five sources of bias in natural language processing

open access: yes | Language and Linguistics Compass, 2021
Recently, there has been an increased interest in demographically grounded bias in natural language processing (NLP) applications. Much of the recent work has focused on describing bias and providing an overview of bias in a larger context.
E. Hovy, Shrimai Prabhumoye
semanticscholar   +1 more source

Supporting Undotted Arabic with Pre-trained Language Models [PDF]

open access: yes | arXiv, 2021
We observe a recent behaviour on social media in which users intentionally remove consonantal dots from Arabic letters in order to bypass content-classification algorithms. Content classification is typically done by fine-tuning pre-trained language models, which have recently been employed by many natural-language-processing applications.
arxiv  
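
The behaviour described above can be illustrated by mapping dotted Arabic letters onto dotless skeleton code points. The mapping below is partial and purely illustrative; the paper itself is concerned with making pre-trained models handle such undotted text, which this sketch does not show.

```python
# Illustrative "undotting": replace dotted Arabic letters with dotless skeletons.
# Partial mapping for demonstration only, not taken from the paper.
UNDOT = str.maketrans({
    "ب": "ٮ", "ت": "ٮ", "ث": "ٮ",   # beh/teh/theh share one skeleton
    "ج": "ح", "خ": "ح",              # jeem/khah -> hah
    "ذ": "د", "ز": "ر", "ش": "س",
    "ض": "ص", "ظ": "ط", "غ": "ع",
    "ف": "ڡ", "ق": "ٯ", "ن": "ں",
    "ة": "ه", "ي": "ى",
})

def undot(text: str) -> str:
    """Strip consonantal dots by mapping letters onto dotless code points."""
    return text.translate(UNDOT)

print(undot("تجربة"))   # the word keeps its shape but loses its distinguishing dots
```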

Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets [PDF]

open access: yes | BioNLP@ACL, 2019
Inspired by the success of the General Language Understanding Evaluation benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to facilitate research in the development of pre-training language representations in the ...
Yifan Peng, Shankai Yan, Zhiyong Lu
semanticscholar   +1 more source
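
The benchmark above evaluates fine-tuned encoders such as BERT across biomedical tasks. The sketch below shows the general fine-tuning pattern for sentence classification with a BERT-style model; the checkpoint name, example sentences and labels are placeholders, not the actual BLUE datasets or loaders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; the paper evaluates BERT variants pre-trained on
# biomedical and clinical text, which are not reproduced here.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy sentences and labels standing in for one of the BLUE classification tasks.
texts = ["Aspirin reduced the risk of myocardial infarction.",
         "The patient reported no adverse events."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                     # a few gradient steps, just to show the loop
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss = {outputs.loss.item():.4f}")
```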
