Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [PDF]
Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations. We define generic personas to represent demographic groups, such as "an Asian person", whereas specific personas may take the form of specific popular Asian names like "Yumi".
Yixin Wan +4 more
arxiv +3 more sources
Improving Personality Consistency in Conversation by Persona Extending [PDF]
Endowing chatbots with a consistent personality plays a vital role in enabling agents to deliver human-like interactions. However, existing personalized approaches commonly generate responses in light of static predefined personas depicted with textual descriptions, which may severely restrict the interaction between the human and the chatbot, especially when the ...
Yifan Liu +5 more
arxiv +3 more sources
MPCHAT: Towards Multimodal Persona-Grounded Conversation [PDF]
In order to build self-consistent personalized dialogue agents, previous research has mostly focused on textual persona that delivers personal facts or personalities. However, to fully describe the multi-faceted nature of persona, image modality can help better reveal the speaker's personal characteristics and experiences in episodic memory (Rubin et ...
Jaewoo Ahn +3 more
arxiv +3 more sources
Developing Persona Analytics Towards Persona Science
Much of the reported work on personas suffers from a lack of empirical evidence. To address this issue, we introduce Persona Analytics (PA), a system that tracks how users interact with data-driven personas. PA captures users' mouse and gaze behavior to measure their interaction with algorithmically generated personas and their use of system features for ...
Soon-Gyo Jung +3 more
openaire +3 more sources
Quantifying the Persona Effect in LLM Simulations [PDF]
Large language models (LLMs) have shown remarkable promise in simulating human language and behavior. This study investigates how integrating persona variables-demographic, social, and behavioral factors-impacts LLMs' ability to simulate diverse perspectives. We find that persona variables account for <10% variance in annotations in existing subjective ...
Tiancheng Hu, Nigel Collier
arxiv +2 more sources
Like hiking? You probably enjoy nature: Persona-grounded Dialog with Commonsense Expansions [PDF]
Existing persona-grounded dialog models often fail to capture simple implications of given persona descriptions, something which humans are able to do seamlessly. For example, state-of-the-art models cannot infer that interest in hiking might imply love for nature or longing for a break.
Bodhisattwa Prasad Majumder +3 more
arxiv +3 more sources
Toxicity in ChatGPT: Analyzing Persona-assigned Language Models [PDF]
Large language models (LLMs) have shown incredible capabilities and transcended the natural language processing (NLP) community, with adoption throughout many services like healthcare, therapy, education, and customer service.
A. Deshpande +4 more
semanticscholar +1 more source
Enhancing Personalized Dialogue Generation with Contrastive Latent Variables: Combining Sparse and Dense Persona [PDF]
Personalized dialogue explores the consistent relationship between dialogue generation and personality. Existing personalized dialogue agents model persona profiles from three resources: sparse or dense persona descriptions and dialogue histories ...
Yihong Tang +6 more
semanticscholar +1 more source
PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives [PDF]
Sustaining coherent and engaging narratives requires dialogue or storytelling agents to understand how the personas of speakers or listeners ground the narrative. Specifically, these agents must infer personas of their listeners to produce statements that ...
Silin Gao +7 more
semanticscholar +1 more source
Long Time No See! Open-Domain Conversation with Long-Term Persona Memory [PDF]
Most open-domain dialogue models tend to perform poorly in the setting of long-term human-bot conversations. A possible reason is that they lack the capability of understanding and memorizing long-term dialogue history information.
Xinchao Xu +6 more
semanticscholar +1 more source