Results 11 to 20 of about 5,810,256 (329)

DecAF: Joint Decoding of Answers and Logical Forms for Question Answering over Knowledge Bases [PDF]

open access: green · International Conference on Learning Representations, 2022
Question answering over knowledge bases (KBs) aims to answer natural language questions with factual information such as entities and relations in KBs. Previous methods either generate logical forms that can be executed over KBs to obtain final answers ...
Donghan Yu   +9 more
openalex   +3 more sources

Structured Prompt Interrogation and Recursive Extraction of Semantics (SPIRES): a method for populating knowledge bases using zero-shot learning [PDF]

open access: yes · Bioinform., 2023
Motivation: Creating knowledge bases and ontologies is a time-consuming task that relies on manual curation. AI/NLP approaches can assist expert curators in populating these knowledge bases, but current approaches rely on extensive training data, and are ...
J. Caufield   +11 more
semanticscholar   +1 more source

Empowering large chemical knowledge bases for exposomics: PubChemLite meets MetFrag. [PDF]

open access: yes · J Cheminform, 2021
Compound (or chemical) databases are an invaluable resource for many scientific disciplines. Exposomics researchers need to find and identify relevant chemicals that cover the entirety of potential (chemical and other) exposures over entire lifetimes ...
Schymanski EL   +5 more
europepmc   +2 more sources

Time-Aware Language Models as Temporal Knowledge Bases [PDF]

open access: yes · Transactions of the Association for Computational Linguistics, 2021
Many facts come with an expiration date, from the name of the President to the basketball team LeBron James plays for. However, most language models (LMs) are trained on snapshots of data collected at a specific moment in time.
Bhuwan Dhingra   +5 more
semanticscholar   +1 more source

KnowledGPT: Enhancing Large Language Models with Retrieval and Storage Access on Knowledge Bases [PDF]

open access: yes · arXiv.org, 2023
Large language models (LLMs) have demonstrated impressive impact in the field of natural language processing, but they still struggle with several issues, such as completeness, timeliness, faithfulness, and adaptability.
Xintao Wang   +7 more
semanticscholar   +1 more source

Case-based Reasoning for Natural Language Queries over Knowledge Bases [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2021
It is often challenging to solve a complex problem from scratch, but much easier if we can access other similar problems with their solutions — a paradigm known as case-based reasoning (CBR).
Rajarshi Das   +8 more
semanticscholar   +1 more source

A Review on Language Models as Knowledge Bases [PDF]

open access: yes · arXiv.org, 2022
Recently, there has been a surge of interest in the NLP community on the use of pretrained Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs trained on a sufficiently large (web) corpus will encode a significant amount of ...
Badr AlKhamissi   +4 more
semanticscholar   +1 more source

Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Trustworthy Response Generation in Chinese [PDF]

open access: yes · ACM Transactions on Knowledge Discovery from Data, 2023
Large Language Models (LLMs) have demonstrated remarkable success in diverse natural language processing (NLP) tasks in general domains. However, LLMs sometimes generate responses with the hallucination about medical facts due to limited domain knowledge.
Hao Wang   +11 more
semanticscholar   +1 more source

Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases [PDF]

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2021
Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially be a reliable knowledge source.
Boxi Cao   +7 more
semanticscholar   +1 more source

Can Language Models be Biomedical Knowledge Bases? [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2021
Pre-trained language models (LMs) have become ubiquitous in solving various natural language processing (NLP) tasks. There has been increasing interest in what knowledge these LMs contain and how we can extract that knowledge, treating LMs as knowledge ...
Mujeen Sung   +5 more
semanticscholar   +1 more source
