Results 51 to 60 of about 14,560

TakeLab at SemEval-2017 Task 6: #RankingHumorIn4Pages [PDF]

open access: yes | Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), 2017
This paper describes our system for humor ranking in tweets within the SemEval 2017 Task 6: #HashtagWars (6A and 6B). For both subtasks, we use an off-the-shelf gradient boosting model built on a rich set of features, handcrafted to provide the model with the external knowledge needed to better predict the humor in the text.
Ivan Mršić   +5 more
openaire   +3 more sources
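The abstract above mentions an off-the-shelf gradient boosting model over handcrafted features. As a stdlib-only illustration of the mechanism (not the TakeLab system itself), the sketch below boosts decision stumps against squared-loss residuals; the two toy features per tweet (length and punctuation count) are hypothetical stand-ins for the paper's rich feature set.

```python
# Minimal gradient boosting with decision stumps (squared loss),
# illustrating the kind of model used for humor ranking over features.

def fit_stump(X, residuals):
    """Find the single (feature, threshold) split minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lm, rm)
    _, j, t, lm, rm = best
    return lambda row: lm if row[j] <= t else rm

def fit_gbm(X, y, n_rounds=20, lr=0.3):
    """Each round fits a stump to the residuals of the running prediction."""
    preds = [0.0] * len(X)
    stumps = []
    for _ in range(n_rounds):
        residuals = [yi - p for yi, p in zip(y, preds)]
        stump = fit_stump(X, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(row) for p, row in zip(preds, X)]
    return lambda row: sum(lr * s(row) for s in stumps)

# Hypothetical handcrafted features per tweet: [length, punctuation count]
X = [[5, 0], [12, 3], [7, 1], [20, 4]]
y = [0.0, 1.0, 0.0, 1.0]   # 1.0 = judged funnier
model = fit_gbm(X, y)
```

A production system would use a tuned library implementation and a ranking-appropriate loss; the squared-loss stump version above only shows the residual-fitting loop that defines gradient boosting.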

Coding energy knowledge in constructed responses with explainable NLP models

open access: yes | Journal of Computer Assisted Learning, Volume 39, Issue 3, Page 767-786, June 2023
Abstract Background Formative assessments are needed to monitor how student knowledge develops throughout a unit. Constructed-response items, which require learners to formulate their own free-text responses, are well suited for testing their active knowledge.
Sebastian Gombert   +9 more
wiley   +1 more source

Using Tsetlin Machine to discover interpretable rules in natural language processing applications

open access: yes | Expert Systems, Volume 40, Issue 4, May 2023
Abstract Tsetlin Machines (TMs) use finite state machines for learning and propositional logic to represent patterns. The resulting pattern-recognition approach captures information in the form of conjunctive clauses, thus facilitating human interpretation.
Rupsa Saha   +2 more
wiley   +1 more source
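The conjunctive-clause representation described above is easy to make concrete: a clause is an AND over literals (a boolean feature or its negation), and classification sums the votes of positive and negative clauses. The sketch below evaluates hand-written clauses over hypothetical bag-of-words features; the actual learning of clauses via Tsetlin automata is omitted.

```python
# A conjunctive clause fires iff every literal it contains is satisfied.

def eval_clause(literals, features):
    """literals: list of (feature_index, expected_value) pairs."""
    return all(features[i] == v for i, v in literals)

def classify(pos_clauses, neg_clauses, features):
    """Sum positive votes minus negative votes; non-negative means class 1."""
    votes = (sum(eval_clause(c, features) for c in pos_clauses)
             - sum(eval_clause(c, features) for c in neg_clauses))
    return int(votes >= 0)

# Hypothetical features: [has "not", has "good", has "bad"]
pos = [[(1, True), (0, False)]]               # "good" and not "not"
neg = [[(2, True)], [(0, True), (1, True)]]   # "bad", or "not ... good"
print(classify(pos, neg, [False, True, False]))  # "good" alone → 1
```

Because each clause is a readable conjunction of literals, a learned model can be inspected rule by rule, which is the interpretability benefit the abstract refers to.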

Analysis of community question‐answering issues via machine learning and deep learning: State‐of‐the‐art review

open access: yes | CAAI Transactions on Intelligence Technology, Volume 8, Issue 1, Page 95-117, March 2023
Abstract Over the last couple of decades, community question‐answering sites (CQAs) have been a topic of much academic interest. Scholars have often leveraged traditional machine learning (ML) and deep learning (DL) to explore the ever‐growing volume of content that CQAs engender.
Pradeep Kumar Roy   +4 more
wiley   +1 more source

SemEval-2018 Task 10: Capturing Discriminative Attributes [PDF]

open access: yes | Proceedings of The 12th International Workshop on Semantic Evaluation, 2018
This paper describes the SemEval 2018 Task 10 on Capturing Discriminative Attributes. Participants were asked to identify whether an attribute could help discriminate between two concepts. For example, a successful system should determine that ‘urine’ is a discriminating feature in the word pair ‘kidney’, ‘bone’.
Alicia Krebs   +2 more
openaire   +4 more sources
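The task definition above (does 'urine' discriminate 'kidney' from 'bone'?) can be sketched with a trivial lexical baseline: an attribute discriminates if it characterizes the first concept but not the second. The property dictionary below is hand-written for illustration; real participant systems used corpus statistics, embeddings, or knowledge bases instead.

```python
# Hypothetical property norms standing in for real-world knowledge sources.
PROPERTIES = {
    "kidney": {"organ", "filters blood", "produces urine"},
    "bone": {"rigid", "contains calcium", "part of skeleton"},
}

def is_discriminative(attribute, concept_a, concept_b):
    """True iff the attribute appears among concept_a's properties only."""
    in_a = any(attribute in prop for prop in PROPERTIES[concept_a])
    in_b = any(attribute in prop for prop in PROPERTIES[concept_b])
    return in_a and not in_b

print(is_discriminative("urine", "kidney", "bone"))  # True
```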

JokeMeter at SemEval-2020 Task 7: Convolutional Humor [PDF]

open access: yes | Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020
This paper describes our system designed for humor evaluation within SemEval-2020 Task 7. The system is based on a convolutional neural network architecture. We evaluate the system on the official dataset and provide more insight into the model itself to see how the learned inner features look.
Martin Docekal   +3 more
openaire   +3 more sources
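The abstract names a convolutional architecture without detail; the core operation of a text CNN is a 1-D convolution sliding a filter over token embeddings, followed by max-pooling. The stdlib sketch below shows only that operation, with hypothetical toy embeddings and filter weights, and is not the JokeMeter model.

```python
# 1-D convolution over a token sequence: dot product of the filter with
# each window of consecutive token embeddings, then max-pooling.

def conv1d(embeddings, filt):
    """filt has shape (width, dim); returns one score per window."""
    k = len(filt)
    out = []
    for i in range(len(embeddings) - k + 1):
        window = embeddings[i:i + k]
        out.append(sum(w * x
                       for row_w, row_x in zip(filt, window)
                       for w, x in zip(row_w, row_x)))
    return out

def max_pool(values):
    """Keep the strongest filter response across all positions."""
    return max(values)

# 4 tokens with 2-dim embeddings; one filter of width 2
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
filt = [[1.0, 0.0], [0.0, 1.0]]
feature_map = conv1d(emb, filt)
print(max_pool(feature_map))
```

In a full model many such filters run in parallel, and their pooled responses feed a classifier; inspecting which windows maximize each filter is one common way to get the kind of insight into learned features the abstract mentions.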

Graph Convolutional Network for Word Sense Disambiguation

open access: yes | Discrete Dynamics in Nature and Society, 2021
Word sense disambiguation (WSD) is an important research topic in natural language processing that is widely applied to text classification, machine translation, and information retrieval.
Chun-Xiang Zhang   +3 more
doaj   +1 more source

SemEval-2016 Task 3: Community Question Answering [PDF]

open access: yes | Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
This paper describes the SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic. For English, we had three subtasks: Question-Comment Similarity (subtask A), Question-Question Similarity (B), and Question-External Comment Similarity (C).
Nakov, Preslav   +7 more
openaire   +3 more sources

BUT-FIT at SemEval-2020 Task 4: Multilingual Commonsense [PDF]

open access: yes | Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020
This paper describes the work of the BUT-FIT team at SemEval-2020 Task 4 (Commonsense Validation and Explanation). We participated in all three subtasks. In subtasks A and B, our submissions are based on pretrained language representation models (namely ALBERT) and data augmentation.
Pavel Smrz   +3 more
openaire   +3 more sources
