Results 1 to 10 of about 850

Post-Authorship Attribution Using Regularized Deep Neural Network

open access: yes | Applied Sciences, 2022
Post-authorship attribution is a scientific process of using stylometric features to identify the genuine writer of an online text snippet such as an email, blog, forum post, or chat log. It has useful applications in manifold domains, for instance, in a
Abiodun Modupe   +3 more
doaj   +1 more source

Sentence Analogies: Linguistic Regularities in Sentence Embeddings [PDF]

open access: yes | Proceedings of the 28th International Conference on Computational Linguistics, 2020
While important properties of word vector representations have been studied extensively, far less is known about the properties of sentence vector representations. Word vectors are often evaluated by assessing to what degree they exhibit regularities with regard to relationships of the sort considered in word analogies. In this paper, we investigate to
Xunjie Zhu, Gerard de Melo
openaire   +1 more source

Linguistic Regularities of LOLspeak [PDF]

open access: yes | Sino-US English Teaching, 2017
The influence of the Internet on language is an unprecedented phenomenon. Perhaps only the emergence of writing systems and the invention of printing can be claimed to have contributed more to the way language is used and perceived. Numerous publications have been devoted to the new forms of language thriving in cyberspace.
Bury, Beata, Wojtaszek, Adam
openaire   +3 more sources

Linguistically Regularized LSTM for Sentiment Classification [PDF]

open access: yes | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
Sentiment understanding has been a long-term goal of AI in the past decades. This paper deals with sentence-level sentiment classification. Though a variety of neural network models have been proposed recently, previous models either depend on expensive phrase-level annotation, whose performance drops substantially when trained with only ...
Qian, Qiao   +3 more
openaire   +2 more sources

Robust Reading Comprehension With Linguistic Constraints via Posterior Regularization [PDF]

open access: yes | IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2020
In spite of great advancements in machine reading comprehension (RC), existing RC models are still vulnerable and not robust to different types of adversarial examples. Neural models over-confidently predict wrong answers to semantically different adversarial examples, while over-sensitively predicting wrong answers to semantically equivalent adversarial examples.
Mantong Zhou, Minlie Huang, Xiaoyan Zhu
openaire   +2 more sources

Improving statistical parsing by linguistic regularization [PDF]

open access: yes | 2010 10th International Conference on Intelligent Systems Design and Applications, 2010
Statistically-based parsers for large corpora, in particular the Penn Treebank (PTB), typically have not used all the linguistic information encoded in the annotated trees on which they are trained. In particular, they have not in general used information that records the effects of derivations, such as empty categories and the representation of ...
Berwick, Robert C.   +1 more
openaire   +2 more sources

Limited-Resource Adapters Are Regularizers, Not Linguists [PDF]

open access: yes | Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Cross-lingual transfer from related high-resource languages is a well-established strategy to enhance low-resource language technologies. Prior work has shown that adapters show promise for, e.g., improving low-resource machine translation (MT). In this work, we investigate an adapter souping method combined with cross-attention fine-tuning of a pre ...
Fekete, Marcell   +6 more
  +5 more sources

Implicit learning of non-linguistic and linguistic regularities in children with dyslexia

open access: yes | Annals of Dyslexia, 2015
One of the hallmarks of dyslexia is the failure to automatise written patterns despite repeated exposure to print. Although many explanations have been proposed to explain this problem, researchers have recently begun to explore the possibility that an underlying implicit learning deficit may play a role in dyslexia.
Nigro, Luciana   +3 more
openaire   +4 more sources

Linguistic Regularities in Sparse and Explicit Word Representations [PDF]

open access: yes | Proceedings of the Eighteenth Conference on Computational Natural Language Learning, 2014
Recent work has shown that neural-embedded word representations capture many relational similarities, which can be recovered by means of vector arithmetic in the embedded space. We show that Mikolov et al.'s method of first adding and subtracting word vectors, and then searching for a word similar to the result, is equivalent to searching for a word ...
Omer Levy, Yoav Goldberg
openaire   +1 more source
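The vector-arithmetic analogy method this entry analyses can be sketched with toy vectors. The sketch below is a minimal illustration of the add-and-subtract scheme (often called 3CosAdd); the 4-dimensional vectors and the tiny vocabulary are invented for demonstration, not taken from any trained embedding.

```python
# Minimal sketch of Mikolov-style analogy recovery: answer
# "a is to b as c is to ?" by computing b - a + c and taking
# the nearest vocabulary word by cosine similarity.
# All vectors below are made up for illustration.
import numpy as np

vocab = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    # Add/subtract word vectors, then search for the most similar
    # word to the result, excluding the three input words.
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: cosine(target, v) for w, v in vocab.items()
                  if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

print(analogy("man", "woman", "king", vocab))  # prints "queen"
```

With these toy vectors, `woman - man + king` lands near `queen`, which is exactly the regularity the papers above evaluate at scale.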
