Results 21 to 30 of about 544,807

Classification of Russian Texts by Genres Based on Modern Embeddings and Rhythm

open access: yes, Моделирование и анализ информационных систем (Modeling and Analysis of Information Systems), 2022
The article investigates modern vector text models for solving the problem of genre classification of Russian-language texts. Models include ELMo embeddings, BERT language model with pre-training and a complex of numerical rhythm features based on lexico- ...
Ksenia Vladimirovna Lagutina
doaj   +1 more source

Aspect-Based Sentiment Analysis on Indonesian Restaurant Review Using a Combination of Convolutional Neural Network and Contextualized Word Embedding

open access: yes, IJCCS (Indonesian Journal of Computing and Cybernetics Systems), 2021
A customer's opinion of a product or service, expressed through a review, is quite important to the owner and to potential customers. However, the large number of reviews makes it difficult for them to analyze the information contained in ...
Putri Rizki Amalia, Edi Winarko
doaj   +1 more source

GP-GCN: Global features of orthogonal projection and local dependency fused graph convolutional networks for aspect-level sentiment classification

open access: yes, Connection Science, 2022
Aspect-level sentiment classification, a significant task of fine-grained sentiment analysis, aims to identify the sentiment information expressed toward each aspect of a given sentence. The existing methods combine global features and local structures to ...
Subo Wei   +4 more
doaj   +1 more source

ParsBERT: Transformer-based Model for Persian Language Understanding

open access: yes, 2020
The surge of pre-trained language models has begun a new era in the field of Natural Language Processing (NLP) by allowing us to build powerful language models.
Farahani, Marzieh   +3 more
core   +1 more source

Neural Language Models for Nineteenth-Century English

open access: yes, Journal of Open Humanities Data, 2021
We present four types of neural language models trained on a large historical dataset of books in English, published between 1760 and 1900, comprising ≈5.1 billion tokens.
Kasra Hosseini   +3 more
doaj   +1 more source

Table Search Using a Deep Contextualized Language Model

open access: yes, 2020
Pretrained contextualized language models such as BERT have achieved impressive results on various natural language processing benchmarks. Benefiting from multiple pretraining tasks and large scale training corpora, pretrained models can capture complex ...
Auer, Sören   +5 more
core   +1 more source

CEDR: Contextualized Embeddings for Document Ranking [PDF]

open access: yes, 2019
Although considerable attention has been given to neural ranking architectures recently, far less attention has been paid to the term representations that are used as input to these models.
Cohan, Arman   +3 more
core   +3 more sources

I-BERT: Integer-only BERT Quantization

open access: yes, 2021
Transformer-based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center.
Kim, Sehoon   +4 more
openaire   +2 more sources

How to Fine-Tune BERT for Text Classification?

open access: yes, 2020
Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in ...
Huang, Xuanjing   +3 more
core   +1 more source

Look at the First Sentence: Position Bias in Question Answering

open access: yes, 2020
Many extractive question answering models are trained to predict start and end positions of answers. The choice of predicting answers as positions is mainly due to its simplicity and effectiveness. In this study, we hypothesize that when the distribution ...
Kang, Jaewoo   +4 more
core   +1 more source
