Optimization techniques for sentiment analysis based on LLM (GPT-3) [PDF]
With the rapid development of natural language processing (NLP) technology, large-scale pre-trained language models such as GPT-3 have become a popular research focus in the NLP field. This paper explores sentiment analysis optimization techniques based on large pre-trained language models such as GPT-3 to improve model performance and effectiveness, and ...
Zhan, Tong +4 more
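The few-shot prompting setup that this line of work tunes can be sketched roughly as follows. The prompt template, example reviews, and helper names (`build_prompt`, `parse_label`) are illustrative assumptions, not the paper's method, and the model call itself is omitted:

```python
# Minimal sketch of few-shot sentiment classification with a GPT-3-style
# completion model. Everything below is illustrative; no API is called.

FEW_SHOT_EXAMPLES = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I want my money back.", "negative"),
]

def build_prompt(text: str) -> str:
    """Assemble a few-shot sentiment prompt for a completion-style LLM."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {review}\nSentiment: {label}\n")
    lines.append(f"Review: {text}\nSentiment:")
    return "\n".join(lines)

def parse_label(completion: str) -> str:
    """Normalize the model's raw completion into a clean label."""
    word = completion.strip().split()[0].lower().rstrip(".")
    return word if word in {"positive", "negative"} else "unknown"

prompt = build_prompt("Service was slow and the food was cold.")
print(parse_label(" Negative"))  # -> negative
```

Optimization work in this setting typically varies the instruction wording, the number and order of in-context examples, and the label normalization step.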
Conversational AI and equity through assessing GPT-3's communication with diverse social groups on contentious topics. [PDF]
Autoregressive language models, which use deep learning to produce human-like texts, have surged in prevalence. Despite advances in these models, concerns arise about their equity across diverse populations. While AI fairness is discussed widely, metrics
Chen K, Shao A, Burapacheep J, Li Y.
An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA [PDF]
Knowledge-based visual question answering (VQA) involves answering questions that require external knowledge not present in the image. Existing methods first retrieve knowledge from external resources, then reason over the selected knowledge, the input ...
Zhengyuan Yang +6 more
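The prompt-construction step this approach relies on can be sketched as follows: the image is replaced by a textual caption, and GPT-3 answers the question few-shot. The prompt format, caption, and in-context example here are invented for illustration:

```python
# Hedged sketch of caption-based few-shot VQA prompting. The helper name
# and all example text are assumptions made for this illustration.

def vqa_prompt(caption: str, question: str, shots: list) -> str:
    """Build a few-shot VQA prompt from (caption, question, answer) triples."""
    parts = ["Answer the question using the image description.", ""]
    for c, q, a in shots:
        parts.append(f"Context: {c}\nQ: {q}\nA: {a}\n")
    parts.append(f"Context: {caption}\nQ: {question}\nA:")
    return "\n".join(parts)

p = vqa_prompt(
    "A man rides a red double-decker bus.",
    "In which city are such buses common?",
    [("A koala sits in a eucalyptus tree.", "Where do koalas live?", "Australia")],
)
print("double-decker" in p)  # -> True
```

The external knowledge the question needs is then supplied implicitly by the language model's pretraining rather than by an explicit retrieval step.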
A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models [PDF]
GPT series models, such as GPT-3, Codex, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities.
Junjie Ye +14 more
News Summarization and Evaluation in the Era of GPT-3 [PDF]
The recent success of prompting large language models like GPT-3 has led to a paradigm shift in NLP research. In this paper, we study its impact on text summarization, focusing on the classic benchmark domain of news summarization.
Tanya Goyal +2 more
A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4 [PDF]
Large language models (LLMs) are a special class of pretrained language models obtained by scaling model size, pretraining corpus and computation. LLMs, because of their large size and pretraining on large volumes of text data, exhibit special abilities ...
Katikapalli Subramanyam Kalyan
Want To Reduce Labeling Cost? GPT-3 Can Help [PDF]
Data annotation is a time-consuming and labor-intensive process for many NLP tasks. Although there exist various methods to produce pseudo data labels, they are often task-specific and require a decent amount of labeled data to start with.
Shuohang Wang +4 more
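One common filtering heuristic for LLM-generated pseudo labels, sampling several completions per example and keeping only labels the samples agree on, can be sketched as below. This is a generic technique, not necessarily the paper's exact method, and the 0.8 threshold is illustrative:

```python
# Hedged sketch of agreement-based filtering for LLM pseudo-labeling.
# `samples` simulates several model completions for one unlabeled example.
from collections import Counter

def pseudo_label(samples, min_agreement=0.8):
    """Return the majority label if agreement meets the threshold, else None."""
    counts = Counter(samples)
    label, freq = counts.most_common(1)[0]
    return label if freq / len(samples) >= min_agreement else None

print(pseudo_label(["positive"] * 4 + ["negative"]))  # -> positive
print(pseudo_label(["positive", "negative", "positive", "negative"]))  # -> None
```

Examples that fail the agreement check can be routed to human annotators, which is where the labeling-cost savings come from.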
Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding [PDF]
Qualitative analysis of textual contents unpacks rich and valuable information by assigning labels to the data. However, this process is often labor-intensive, particularly when working with large datasets. While recent AI-based tools demonstrate utility,
Ziang Xiao +4 more
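The core idea, injecting a codebook into the prompt so the model assigns predefined codes to excerpts, can be sketched as follows. The codebook entries and prompt wording are made-up illustrations, not the paper's template:

```python
# Hedged sketch of codebook-guided deductive coding with an LLM.
# CODEBOOK contents and the prompt format are illustrative assumptions.

CODEBOOK = {
    "barrier": "The participant describes an obstacle to using the system.",
    "benefit": "The participant describes a positive outcome of using the system.",
}

def coding_prompt(excerpt: str) -> str:
    """Build a deductive-coding prompt containing the full codebook."""
    defs = "\n".join(f"- {code}: {desc}" for code, desc in CODEBOOK.items())
    return (
        "Assign exactly one code from the codebook to the excerpt.\n"
        f"Codebook:\n{defs}\n"
        f"Excerpt: {excerpt}\nCode:"
    )

print("barrier" in coding_prompt("I couldn't find the export button."))  # -> True
```

The model's assigned codes can then be compared against human coders' labels to measure agreement before trusting it on the full dataset.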
Chat2VIS: Generating Data Visualizations via Natural Language Using ChatGPT, Codex and GPT-3 Large Language Models [PDF]
The field of data visualisation has long aimed to devise solutions for generating visualisations directly from natural language text. Research in Natural Language Interfaces (NLIs) has contributed towards the development of such techniques.
Paula Maddigan, Teo Sušnjak
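The general recipe, embedding the dataset's schema in a prompt and asking a code model to complete a plotting script, can be sketched as below. The prompt wording is an assumption for illustration, not Chat2VIS's actual template:

```python
# Hedged sketch of natural-language-to-visualization prompting.
# The helper name and prompt text are illustrative assumptions.

def make_vis_prompt(columns: dict, question: str) -> str:
    """Build a code-completion prompt that describes the DataFrame schema."""
    schema = ", ".join(f"{name} ({dtype})" for name, dtype in columns.items())
    return (
        f"A pandas DataFrame `df` has columns: {schema}.\n"
        f"Write Python matplotlib code to answer: {question}\n"
        "import matplotlib.pyplot as plt\n"
    )

prompt = make_vis_prompt({"year": "int", "sales": "float"}, "Show sales over time.")
print("year (int)" in prompt)  # -> True
```

Ending the prompt with the start of a script nudges a completion model such as Codex to continue with runnable plotting code rather than prose.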
Thinking about the language models and what we can do with them
Massive developments in technology, ICT, and artificial intelligence have been witnessed in recent years, with various projects emerging that demonstrate the apparent superiority of artificial intelligence over human intelligence, such as Deep Blue, AlphaGo ...
Horváth Roman