The Arabic Parallel Gender Corpus 2.0: Extensions and Analyses [PDF]
Gender bias in natural language processing (NLP) applications, particularly machine translation, has been receiving increasing attention. Much of the research on this issue has focused on mitigating gender bias in English NLP models and systems. Addressing the problem in poorly resourced and/or morphologically rich languages has lagged behind, largely ...
arxiv
Culture and generalized inattentional blindness [PDF]
A recent mathematical treatment of Baars' Global Workspace consciousness model, much in the spirit of Dretske's communication theory analysis of high-level mental function, is used to study the effects of embedding cultural heritage on a generalized form ...
Wallace, Rodrick
core
Assessing the influence of attractor-verb distance on grammatical agreement in humans and language models [PDF]
Subject-verb agreement in the presence of an attractor noun located between the main noun and the verb elicits complex behavior: judgments of grammaticality are modulated by the grammatical features of the attractor. For example, in the sentence "The girl near the boys likes climbing", the attractor (boys) disagrees in grammatical number with the verb (likes) ... (a minimal probing sketch follows this entry).
arxiv
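To make the probing setup concrete, here is a minimal sketch assuming a HuggingFace causal language model (GPT-2). The model choice, the function name verb_logprob, and the scoring scheme are illustrative assumptions, not the paper's exact method; only the example sentence and verb pair come from the abstract above.

```python
# A minimal sketch of an attractor-agreement probe, assuming a HuggingFace
# causal LM (GPT-2). Model choice and scoring scheme are illustrative
# assumptions, not the paper's exact method.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def verb_logprob(prefix: str, verb: str) -> float:
    """Total log-probability of `verb`'s subwords immediately after `prefix`."""
    prefix_ids = tokenizer.encode(prefix)
    verb_ids = tokenizer.encode(" " + verb)  # leading space marks a word boundary in GPT-2 BPE
    input_ids = torch.tensor([prefix_ids + verb_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    # Logits at position t predict the token at position t + 1.
    return sum(log_probs[0, len(prefix_ids) + i - 1, tok].item()
               for i, tok in enumerate(verb_ids))

# The attractor ("boys") mismatches the head noun ("girl") in grammatical number.
prefix = "The girl near the boys"
sg, pl = verb_logprob(prefix, "likes"), verb_logprob(prefix, "like")
print(f"log P(likes) = {sg:.2f}   log P(like) = {pl:.2f}")
print("grammatical form preferred" if sg > pl else "attraction error")
```

The grammaticality preference is read off by comparing the two scores; an attraction error shows up as the plural form outscoring the grammatically correct singular.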
Iltifat, Grammatical Person Shift and Cohesion in the Holy Quran [PDF]
While discourse analysis goes back to the second half of the 1950s, related research in the field of rhetoric reflects interests pursued centuries earlier by Islamic rhetoricians. Among these topics, Iltifat, the grammatical person shift, is one ...
Zahedi, Keivan+2 more
core +1 more source
Generalized inattentional blindness from a Global Workspace perspective [PDF]
We apply Baars' Global Workspace model of consciousness to inattentional blindness, using the groupoid network method of Stewart et al. to explore modular structures defined by information measures associated with cognitive process.
Wallace, Rodrick
core
Attentive Tensor Product Learning
This paper proposes a new architecture, Attentive Tensor Product Learning (ATPL), to represent grammatical structures in deep learning models (a sketch of the role-filler binding underlying tensor product representations follows this entry).
Deng, Li+4 more
core +1 more source
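ATPL's name points to Smolensky-style tensor product representations, in which each filler (e.g., a word embedding) is bound to a structural role by an outer product and the bindings are summed. The numpy sketch below illustrates plain role-filler binding and unbinding with orthonormal roles; it is a sketch of that background representation scheme only, under the assumption that it is the relevant one, and not an implementation of the ATPL architecture or its attention mechanism.

```python
# A minimal numpy sketch of tensor product role-filler binding
# (Smolensky-style), the representation scheme the ATPL name refers to.
# Illustrative only: not the ATPL architecture or its attention mechanism.
import numpy as np

d_filler, n_roles = 4, 3
rng = np.random.default_rng(0)

# Filler vectors stand in for word embeddings; orthonormal role vectors
# (here the identity basis) make unbinding exact.
fillers = {w: rng.normal(size=d_filler) for w in ("the", "girl", "runs")}
roles = np.eye(n_roles)

# Bind: T = sum_i  filler_i (outer product) role_i.
T = sum(np.outer(fillers[w], roles[i])
        for i, w in enumerate(("the", "girl", "runs")))

# Unbind role 1 by contracting T with its role vector; with orthonormal
# roles this recovers the bound filler exactly.
recovered = T @ roles[1]
assert np.allclose(recovered, fillers["girl"])
print("filler bound to role 1:", recovered)
```

Orthonormal roles are what make unbinding a simple inner product; learned, non-orthogonal role vectors trade that exactness for capacity.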
Structural Supervision Improves Learning of Non-Local Grammatical Dependencies [PDF]
State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success. Here we investigate whether supervision with hierarchical structure enhances learning of a range of grammatical dependencies, a question that ...
arxiv
Attribution Analysis of Grammatical Dependencies in LSTMs [PDF]
LSTM language models have been shown to capture syntax-sensitive grammatical dependencies such as subject-verb agreement with a high degree of accuracy (Linzen et al., 2016, inter alia). However, questions remain regarding whether they do so using spurious correlations, or whether they are truly able to match verbs with their subjects.
arxiv
On the scope of the referential hierarchy in the typology of grammatical relations [PDF]
In the late seventies, Bernard Comrie was one of the first linguists to explore the effects of the referential hierarchy (RH) on the distribution of grammatical relations (GRs).
Bickel, Balthasar
core
fMRI Evidence for the Involvement of the Procedural Memory System in Morphological Processing of a Second Language [PDF]
Behavioural evidence suggests that English regular past tense forms are automatically decomposed into their stem and affix (played = play+ed) based on an implicit linguistic rule, which does not apply to the idiosyncratically formed irregular forms ... (a toy sketch of this dual-route account follows this entry).
Johnstone, Tom+2 more
core +2 more sources
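To make the regular/irregular contrast concrete, here is a toy sketch of the dual-route account the abstract describes: regulars decompose by rule, irregulars come from whole-form storage. The lookup table and the function name decompose are hypothetical illustrations, not materials from the study, and the rule deliberately ignores spelling adjustments such as consonant doubling (stopped) and e-deletion (liked).

```python
# A toy sketch of the dual-route account: regular past tenses decompose by
# rule (played = play + -ed); irregulars are retrieved as stored whole forms.
# The irregular table below is a hypothetical illustration, not study data.
IRREGULARS = {"went": "go", "sang": "sing", "brought": "bring"}

def decompose(past_form):
    """Return (stem, affix); affix is None when the form is stored whole."""
    if past_form in IRREGULARS:        # whole-form (declarative memory) route
        return IRREGULARS[past_form], None
    if past_form.endswith("ed"):       # rule-based (procedural memory) route
        return past_form[:-2], "-ed"
    return past_form, None             # no past-tense analysis available

print(decompose("played"))  # ('play', '-ed')
print(decompose("went"))    # ('go', None)
```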