Building on previous research related to information literacy and learning with Wikipedia, this article interprets Wikipedia editing practices as fulfilling the Association of College and Research Libraries’ (ACRL) Framework for Information Literacy in ...
Zachary J. McDowell, Matthew A. Vetter
doaj +2 more sources
Wikipedia citations: A comprehensive data set of citations with identifiers extracted from English Wikipedia [PDF]
Wikipedia’s content is based on reliable and published sources. To date, relatively little is known about what sources Wikipedia relies on, in part because extracting citations and identifying cited sources is challenging.
Harshdeep Singh +2 more
doaj +2 more sources
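As a rough illustration of the kind of extraction the citation data set above involves, here is a minimal Python sketch, assuming the mwparserfromhell library and the public MediaWiki API; the example article title is a placeholder, and the template and identifier handling is far simpler than in the actual data set.

```python
# Hypothetical sketch: pulling citation templates (and identifiers, where
# present) out of raw wikitext with mwparserfromhell. The example article
# title and the use of the public MediaWiki API are assumptions, not part
# of the data set described above.
import requests
import mwparserfromhell

API = "https://en.wikipedia.org/w/api.php"

def fetch_wikitext(title: str) -> str:
    """Fetch the current wikitext of an English Wikipedia article."""
    params = {
        "action": "parse",
        "page": title,
        "prop": "wikitext",
        "format": "json",
        "formatversion": 2,
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["parse"]["wikitext"]

def extract_citations(wikitext: str):
    """Yield (template_name, identifiers) pairs for citation-like templates."""
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        name = str(tpl.name).strip().lower()
        if not name.startswith("cite") and name != "citation":
            continue
        ids = {}
        for key in ("doi", "pmid", "isbn", "arxiv"):
            if tpl.has(key):
                ids[key] = str(tpl.get(key).value).strip()
        yield name, ids

if __name__ == "__main__":
    text = fetch_wikitext("Alan Turing")  # placeholder example page
    for name, ids in extract_citations(text):
        print(name, ids)
```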
Situating Wikipedia as a health information resource in various contexts: A scoping review.
BACKGROUND: Wikipedia's health content is the most frequently visited resource for health information on the internet. While the literature provides strong evidence for its high usage, a comprehensive literature review of Wikipedia's role within the ...
Denise A Smith
doaj +2 more sources
This article analyzes the political economy of Wikipedia. We discuss the specifics of Wikipedia’s mode of production and present the basic principles of what we call the info-communist mode of production. Our analysis is grounded in Marxist philosophy and Marxist political economy, and is connected to the current discourse ...
Sylvain Firer-Blaess, Christian Fuchs
semanticscholar +3 more sources
With or without Wikipedia? Integrating Wikipedia into the Teaching Process in Estonian General Education Schools [PDF]
Today’s education has been shaped by the rapid development of digital technologies and easy access to a large number of electronic sources. This has created a genuine need to change current teaching attitudes and practices.
Riina Reinsalu +4 more
doaj +2 more sources
Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models [PDF]
We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages.
Yijia Shao +5 more
openalex +2 more sources
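For the article-generation entry above, here is a minimal sketch of an outline-then-expand pipeline; the `complete()` callable is a hypothetical stand-in for whatever LLM API is available, and the prompts are illustrative rather than those used in the paper.

```python
# Minimal sketch of an outline-then-expand pipeline for drafting a
# Wikipedia-style article with an LLM. `complete()` is a hypothetical
# stand-in for an actual chat/completion API; the prompts are
# illustrative only, not those used in the paper above.
from typing import Callable, List

def draft_article(topic: str, complete: Callable[[str], str]) -> str:
    # 1. Ask the model for a section outline, one heading per line.
    outline_prompt = (
        "Propose a concise section outline (one heading per line) "
        f"for an encyclopedia article about: {topic}"
    )
    headings: List[str] = [
        line.strip("-# ").strip()
        for line in complete(outline_prompt).splitlines()
        if line.strip()
    ]

    # 2. Expand each heading into a grounded section.
    sections = []
    for heading in headings:
        section_prompt = (
            "Write a neutral, well-sourced encyclopedia section titled "
            f"'{heading}' for an article about {topic}. Cite sources inline."
        )
        sections.append(f"== {heading} ==\n{complete(section_prompt)}")

    # 3. Assemble the full article in wiki-style markup.
    return f"= {topic} =\n\n" + "\n\n".join(sections)
```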
WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning [PDF]
The milestone improvements brought about by deep representation learning and pre-training techniques have led to large performance gains across downstream NLP, IR and Vision tasks.
Krishna Srinivasan +4 more
semanticscholar +1 more source
WiCE: Real-World Entailment for Claims in Wikipedia [PDF]
Textual entailment models are increasingly applied in settings like fact-checking, presupposition verification in question answering, or summary evaluation. However, these represent a significant domain shift from existing entailment datasets, and models ...
Ryo Kamoi +3 more
semanticscholar +1 more source
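To make the entailment setting above concrete, here is a small sketch of scoring an (evidence, claim) pair with an off-the-shelf NLI checkpoint; roberta-large-mnli is assumed here, and this is the generic task framing rather than the WiCE data or models.

```python
# Sketch of claim verification with an off-the-shelf NLI model
# (roberta-large-mnli assumed), not the WiCE models or data themselves.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # assumed general-purpose NLI checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_scores(evidence: str, claim: str) -> dict:
    """Return label -> probability for (evidence, claim) treated as an NLI pair."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    return {model.config.id2label[i]: float(p) for i, p in enumerate(probs)}

print(entailment_scores(
    "Marie Curie was awarded the Nobel Prize in Physics in 1903.",
    "Marie Curie won a Nobel Prize.",
))
```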
Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities [PDF]
Large-scale multi-modal pre-training models such as CLIP [30] and PaLI [8] exhibit strong generalization on various visual domains and tasks. However, existing image classification benchmarks often evaluate recognition on a specific domain (e.g., outdoor ...
Hexiang Hu +7 more
semanticscholar +1 more source
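For the visual entity recognition entry above, here is a minimal sketch of zero-shot scoring of candidate Wikipedia entity names against an image with CLIP, assuming the openai/clip-vit-base-patch32 checkpoint; the candidate list and image path are placeholders, and the actual benchmark covers millions of entities with much stronger models.

```python
# Sketch: zero-shot scoring of candidate Wikipedia entity names against an
# image with CLIP. The candidate list and image path are placeholders; the
# benchmark above uses a far larger label space.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidates = ["Golden Gate Bridge", "Tower Bridge", "Brooklyn Bridge"]
image = Image.open("example.jpg")  # placeholder image file

inputs = processor(
    text=[f"a photo of {name}" for name in candidates],
    images=image,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_candidates)
probs = logits.softmax(dim=-1).squeeze(0)

for name, p in zip(candidates, probs):
    print(f"{name}: {float(p):.3f}")
```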

