Results 1 to 10 of about 850
Linguistics as Metaphor in Organizational Regularization and Decay
Marc S. Mentzer
openalex +3 more sources
Post-Authorship Attribution Using Regularized Deep Neural Network
Post-authorship attribution is a scientific process of using stylometric features to identify the genuine writer of an online text snippet such as an email, blog, forum post, or chat log. It has useful applications in manifold domains, for instance, in a ...
Abiodun Modupe +3 more
doaj +1 more source
Sentence Analogies: Linguistic Regularities in Sentence Embeddings [PDF]
While important properties of word vector representations have been studied extensively, far less is known about the properties of sentence vector representations. Word vectors are often evaluated by assessing to what degree they exhibit regularities with regard to relationships of the sort considered in word analogies. In this paper, we investigate to ...
Xunjie Zhu, Gerard de Melo
openaire +1 more source
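The word-analogy evaluation this abstract refers to can be made concrete in a few lines of NumPy. The sketch below is purely illustrative: the toy word vectors, the mean-pooling sentence encoder, and the candidate sentences are all assumptions for demonstration, not material from the paper.

```python
import numpy as np

# Toy word vectors, hand-picked so the king/queen regularity holds.
W = {
    "he":    np.array([1.0, 0.0, 0.0]),
    "she":   np.array([0.0, 1.0, 0.0]),
    "is":    np.array([0.2, 0.2, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
    "royal": np.array([0.0, 0.0, 1.0]),
}

def embed(sentence):
    # Mean-pooled word vectors as a minimal sentence encoder
    # (an assumption made here for illustration only).
    return np.mean([W[w] for w in sentence.split()], axis=0)

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Sentence analogy: "he is king" : "she is queen" :: "he is royal" : ?
target = embed("she is queen") - embed("he is king") + embed("he is royal")
candidates = ["she is royal", "she is king", "he is queen"]
print(max(candidates, key=lambda s: cos(embed(s), target)))
# -> "she is royal", if the word-level regularity survives pooling
```

As at the word level, the query sentences themselves are excluded from the candidate set, since the offset vector tends to remain closest to them.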
Linguistic Regularities of LOLspeak [PDF]
The influence of the Internet on language is an unprecedented phenomenon. Perhaps only the emergence of writing systems and the invention of printing can be claimed to have contributed more to the way language is used and perceived. Numerous publications have been devoted to the new forms of language thriving in cyberspace.
Bury, Beata; Wojtaszek, Adam
openaire +3 more sources
Linguistically Regularized LSTM for Sentiment Classification [PDF]
Sentiment understanding has been a long-term goal of AI over the past decades. This paper deals with sentence-level sentiment classification. Though a variety of neural network models have been proposed recently, previous models either depend on expensive phrase-level annotation, whose performance drops substantially when trained with only ...
Qian, Qiao +3 more
openaire +2 more sources
Robust Reading Comprehension With Linguistic Constraints via Posterior Regularization [PDF]
In spite of great advances in machine reading comprehension (RC), existing RC models are still vulnerable and not robust to different types of adversarial examples. Neural models over-confidently predict wrong answers for semantically different adversarial examples, while over-sensitively predicting wrong answers for semantically equivalent ones.
Mantong Zhou, Minlie Huang, Xiaoyan Zhu
openaire +2 more sources
Improving statistical parsing by linguistic regularization [PDF]
Statistically-based parsers for large corpora, in particular the Penn Treebank (PTB), typically have not used all the linguistic information encoded in the annotated trees on which they are trained. In particular, they have not in general used information that records the effects of derivations, such as empty categories and the representation of ...
Berwick, Robert C. +1 more
openaire +2 more sources
Limited-Resource Adapters Are Regularizers, Not Linguists [PDF]
Cross-lingual transfer from related high-resource languages is a well-established strategy for enhancing low-resource language technologies. Prior work has shown that adapters hold promise for, e.g., improving low-resource machine translation (MT). In this work, we investigate an adapter souping method combined with cross-attention fine-tuning of a pre ...
Fekete, Marcell +6 more
+5 more sources
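The snippet above does not unpack "adapter souping". Read in the spirit of model soups, it suggests parameter-averaging several trained adapters. Below is a minimal sketch under that assumption; the function, the generic parameter dictionaries, and the uniform default weights are illustrative choices, not code from the paper.

```python
import numpy as np

def soup_adapters(adapter_params, weights=None):
    """Average several adapters parameter-by-parameter ('souping').

    Assumes every adapter shares the same architecture, so the
    parameter dictionaries have identical keys and shapes.
    """
    n = len(adapter_params)
    weights = weights if weights is not None else [1.0 / n] * n
    return {
        name: sum(w * params[name] for w, params in zip(weights, adapter_params))
        for name in adapter_params[0]
    }

# E.g., combine adapters trained on three related high-resource
# languages before transferring to a low-resource target.
adapters = [
    {"down.weight": np.ones((4, 2)), "up.weight": np.zeros((2, 4))}
    for _ in range(3)
]
souped = soup_adapters(adapters)
```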
Implicit learning of non-linguistic and linguistic regularities in children with dyslexia
One of the hallmarks of dyslexia is the failure to automatise written patterns despite repeated exposure to print. Although many explanations for this problem have been proposed, researchers have recently begun to explore the possibility that an underlying implicit learning deficit may play a role in dyslexia.
Nigro, Luciana +3 more
openaire +4 more sources
Linguistic Regularities in Sparse and Explicit Word Representations [PDF]
Recent work has shown that neural-embedded word representations capture many relational similarities, which can be recovered by means of vector arithmetic in the embedded space. We show that Mikolov et al.'s method of first adding and subtracting word vectors, and then searching for a word similar to the result, is equivalent to searching for a word ...
Omer Levy, Yoav Goldberg
openaire +1 more source
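The equivalence this abstract states (offset-then-nearest-neighbour versus maximizing a combination of pairwise similarities) is easy to verify numerically for unit-normalized vectors. The sketch below uses random unit vectors and an invented vocabulary purely for illustration; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = [f"w{i}" for i in range(1000)]          # invented vocabulary
E = rng.normal(size=(1000, 64))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows
idx = {w: i for i, w in enumerate(vocab)}

def offset_then_search(a, b, c):
    # Mikolov et al.: form b - a + c, then take the nearest neighbour.
    t = E[idx[b]] - E[idx[a]] + E[idx[c]]
    return vocab[int(np.argmax(E @ t))]

def combined_similarities(a, b, c):
    # Equivalent view: maximize cos(w,b) - cos(w,a) + cos(w,c) directly.
    scores = E @ E[idx[b]] - E @ E[idx[a]] + E @ E[idx[c]]
    return vocab[int(np.argmax(scores))]

assert offset_then_search("w1", "w2", "w3") == combined_similarities("w1", "w2", "w3")
```

In both forms, the query words are conventionally excluded from the search; the sketch omits that detail for brevity.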