
People have different expectations for their own versus others' use of AI‐mediated communication tools

open access: yes. British Journal of Psychology, EarlyView.
Abstract Artificial intelligence (AI) can enhance human communication, for example, by improving the quality of our writing, voice or appearance. However, AI-mediated communication also has risks—it may increase deception, compromise authenticity or yield widespread mistrust. As a result, both policymakers and technology firms are developing approaches
Zoe A. Purcell   +4 more
wiley   +1 more source

Artificial intelligence chatbots mimic human collective behaviour

open access: yes. British Journal of Psychology, EarlyView.
Abstract Artificial Intelligence (AI) chatbots, such as ChatGPT, have been shown to mimic individual human behaviour in a wide range of psychological and economic tasks. Do groups of AI chatbots also mimic collective behaviour? If so, artificial societies of AI chatbots may aid social scientific research by simulating human collectives.
James K. He   +3 more
wiley   +1 more source

Demystifying the mist: Why do individuals hesitate to accept AI educational services?

open access: yes. British Journal of Psychology, EarlyView.
Abstract Rapid advances in AI technology are fuelling the proliferation of AI applications across industries, including educational services. With the allure of intelligent tutoring, individuals now face a choice of educational approach—either parental engagement or the use of AI educational services. This research employs an experimental design
Aiping Shao   +4 more
wiley   +1 more source

Chatbot Reliability in Managing Thoracic Surgical Clinical Scenarios [PDF]

open access: bronze
Joseph Platz   +3 more
openalex   +1 more source

Evaluating Creative Output With Generative Artificial Intelligence: Comparing GPT Models and Human Experts in Idea Evaluation

open access: yes. Creativity and Innovation Management, Volume 34, Issue 4, Page 991-1012, December 2025.
ABSTRACT Traditional techniques for evaluating creative outcomes are typically based on evaluations made by human experts. These methods suffer from challenges such as subjectivity, biases, limited availability, ‘crowding’, and high transaction costs. We propose that large language models (LLMs) can be used to overcome these shortcomings.
Theresa Kranzle, Katelyn Sharratt
wiley   +1 more source
