Results 51 to 60 of about 9,135 (197)

New Insights Into Lakota Syntax: The Encoding of Arguments and the Number of Verbal Affixes

open access: yesStudia Linguistica, Volume 80, Issue 1, April 2026.
ABSTRACT This paper examines the morphosyntax of transitive constructions in Lakota, with particular emphasis on the encoding of arguments. The analysis of argument marking through verbal affixes in Lakota transitive constructions raises two main questions: the existence or non‐existence of the zero marker for the third person singular and
Avelino Corral Esteban
wiley   +1 more source

An ELECTRA-Based Model for Neural Coreference Resolution

open access: yesIEEE Access, 2022
In recent years, coreference resolution has received a considerable performance boost by exploiting different pre-trained neural language models, from BERT to SpanBERT to Longformer.
Francesco Gargiulo   +6 more
doaj   +1 more source

SciLitMiner: An Intelligent System for Scientific Literature Mining and Knowledge Discovery

open access: yesAdvanced Intelligent Systems, Volume 8, Issue 3, March 2026.
SciLitMiner is an intelligent system that federately ingests scientific literature, filters it using advanced information retrieval methods, and applies retrieval‐augmented generation tailored to scientific domains. Demonstrated on creep deformation in γ‐TiAl alloys, SciLitMiner provides a controlled workflow for systematic knowledge discovery and ...
Vipul Gupta   +3 more
wiley   +1 more source

SUC-CORE: A Balanced Corpus Annotated with Noun Phrase Coreference

open access: yesNorthern European Journal of Language Technology, 2013
This paper describes SUC-CORE, a subset of the Stockholm Umeå Corpus and the Swedish Treebank annotated with noun phrase coreference. While most coreference annotated corpora consist of texts of similar types within related domains, SUC-CORE consists of
Kristina Nilsson Björkenstam
doaj   +1 more source

Coreference Resolution Using Neural MCDM and Fuzzy Weighting Technique

open access: yesInternational Journal of Computational Intelligence Systems, 2020
Coreference resolution has been an active field of research in the past several decades and plays a vital role in many areas such as information extraction, document summarization, machine translation, and question answering systems.
Samira Hourali   +2 more
doaj   +1 more source

Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution [PDF]

open access: yes, 2005
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be ...
Stuckardt, Roland
core  

Dynamic Entity Representations in Neural Language Models

open access: yes, 2017
Understanding a long document requires tracking how entities are introduced and evolve over time. We present a new type of language model, EntityNLM, that can explicitly model entities, dynamically update their representations, and contextually generate ...
Choi, Yejin   +4 more
core   +1 more source

Marmara Turkish Coreference Corpus and Coreference Resolution Baseline

open access: yes, 2017
We describe the Marmara Turkish Coreference Corpus, which is an annotation of the whole METU-Sabanci Turkish Treebank with mentions and coreference chains. Collecting eight or more independent annotations for each document allowed for fully automatic adjudication. We provide a baseline system for Turkish mention detection and coreference resolution and
Schüller, Peter   +6 more
openaire   +3 more sources

Large language models for bioinformatics

open access: yesQuantitative Biology, Volume 14, Issue 1, March 2026.
Abstract With the rapid advancements in large language model technology and the emergence of bioinformatics‐specific language models (BioLMs), there is a growing need for a comprehensive analysis of the current landscape, computational characteristics, and diverse applications.
Wei Ruan   +54 more
wiley   +1 more source

Evidence Against Syntactic Encapsulation in Large Language Models

open access: yesCognitive Science, Volume 50, Issue 3, March 2026.
Abstract Transformer‐based large language models (LLMs) have recently demonstrated exceptional performance in a variety of linguistic tasks. LLMs primarily combine information across words in a sentence using the attention mechanism, implemented by “attention heads”: these components assign numerical weights linking different words in the input to one ...
Thomas A. McGee   +2 more
wiley   +1 more source
