Results 271 to 280 of about 6,965,125
Some of the following articles may not be open access.
HARRIS, FIRTH, AND DISTRIBUTIONAL SEMANTICS
Lege Artis. Language yesterday, today, tomorrow
The direct influence of Zellig Harris and John R. Firth on present-day distributional corpus semantics is so limited that attributing a direct foundational role to them (as is often done in distributional semantic papers) is somewhat misleading ...
Dirk Geeraerts
semanticscholar +1 more source
Are LLMs Models of Distributional Semantics? A Case Study on Quantifiers
arXiv.org
Distributional semantics is the linguistic theory that a word's meaning can be derived from its distribution in natural language (i.e., its use). Language models are commonly viewed as an implementation of distributional semantics, as they are optimized ...
Zhang Enyan +4 more
semanticscholar +1 more source
From Semantics to Spatial Distribution
2000
This work studies the notion of locality in the context of process specification. It relates naturally to other works in which information about the localities of a program is obtained from its description written in a programming language.
Sierra Abbate, L.R. +2 more
openaire +3 more sources
Composition in Distributional Semantics
Language and Linguistics Compass, 2013
Distributional Semantic Models, which automatically induce word meaning representations from naturally occurring textual data, are a success story of computational linguistics. Recently, there has been much interest in whether such models, endowed with a compositional component, can also successfully ...
openaire +2 more sources
Semantic Oriented Document Clustering Using Distribution Semantics
Proceedings of the 2nd International Conference on Information System and Data Mining, 2018
The exponential growth of textual documents in electronic form, in both public and proprietary storage, forces researchers to find ways to efficiently extract meaningful, actionable information from these documents. Document clustering has found its niche in this area.
Umar Ali Khan, Muhammad Rafi
openaire +1 more source
World Wide Web, 2011
Classifications are trees where links between nodes codify the fact that a node lower in the hierarchy describes a topic (and contains documents about this topic) which is more specific than the topic of the node one level above. In turn, multiple classifications can be connected by semantic links which represent mappings among them and which can be ...
Giunchiglia, Fausto +2 more
openaire +2 more sources
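The snippet above describes a concrete data structure: classification trees whose lower nodes denote more specific topics (and hold documents about them), with semantic links mapping nodes across classifications. A minimal sketch of that structure, with all names being illustrative assumptions rather than the authors' actual API:

```python
# Classification trees with cross-classification semantic links,
# sketched from the abstract's description. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    topic: str
    documents: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def add_child(self, child):
        # A node lower in the hierarchy describes a topic more
        # specific than that of the node one level above.
        self.children.append(child)
        return child

# One classification tree.
root = Node("science")
cs = root.add_child(Node("computer science"))
nlp = cs.add_child(Node("natural language processing", ["doc1.pdf"]))

# A second classification, connected to the first by a semantic link
# (a mapping between nodes of the two trees).
other_root = Node("technology")
ai = other_root.add_child(Node("artificial intelligence"))
mappings = [(ai, nlp, "more_general_than")]
```
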
Distributional Compositional Semantics and Text Similarity
2012 IEEE Sixth International Conference on Semantic Computing, 2012
In this paper, an approach for semantic composition based on space projection operations over basic geometric lexical representations is proposed. Syntactic bi-grams are here projected into the so-called Support Subspace, aimed at emphasizing the semantic features shared by the compound word.
Croce, Danilo +3 more
openaire +1 more source
Distributivity in Formal Semantics
Annual Review of Linguistics, 2019
Distributivity in natural language occurs in sentences such as John and Mary (each) took a deep breath, when a predicate that is combined with a plurality-denoting expression is understood as holding of each of the members of that plurality. Language provides ways to express distributivity overtly, with words such as English each, but also covertly ...
openaire +1 more source
Supervised Distributional Semantic Relatedness
2012
Distributional measures of semantic relatedness determine word similarity based on how frequently a pair of words appear in the same contexts. A typical method is to construct a word-context matrix, then re-weight it using some measure of association, and finally take the vector distance as a measure of similarity. This has largely been an unsupervised ...
Alistair Kennedy, Stan Szpakowicz
openaire +1 more source
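The snippet above outlines a standard unsupervised pipeline: build a word-context co-occurrence matrix, re-weight it with an association measure, and take vector distance as relatedness. A minimal sketch of that pipeline, using PPMI as the association measure and cosine as the distance (the toy corpus and window size are illustrative assumptions, not the paper's setup):

```python
# Word-context counts -> PPMI re-weighting -> cosine similarity.
import math
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2

# 1. Build a word-context co-occurrence matrix from a sliding window.
counts = Counter()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            counts[(w, corpus[j])] += 1

total = sum(counts.values())
word_tot, ctx_tot = Counter(), Counter()
for (w, c), n in counts.items():
    word_tot[w] += n
    ctx_tot[c] += n

# 2. Re-weight with positive pointwise mutual information (PPMI).
def ppmi(w, c):
    n = counts.get((w, c), 0)
    if n == 0:
        return 0.0
    return max(0.0, math.log((n * total) / (word_tot[w] * ctx_tot[c])))

# 3. Take the vector distance (here, cosine) as the relatedness score.
def cosine(w1, w2):
    contexts = sorted(ctx_tot)
    v1 = [ppmi(w1, c) for c in contexts]
    v2 = [ppmi(w2, c) for c in contexts]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Words that share contexts ("sat", "on") score above zero.
print(cosine("cat", "dog"))
```
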

