Results 221 to 230 of about 235,742 (282)

What if Adam Smith Debated an AI Economist: A Thought Experiment on Markets, Ethics, and the Invisible Hand

open access: yes. Business Ethics, the Environment & Responsibility, EarlyView.
ABSTRACT Can AI‐driven capitalism sustain the moral preconditions of market order? We stage a dialogue between Adam Smith and a steel‐manned “EconAI” to test four Moral‐Market‐Fitness criteria (trustworthiness, fairness, non‐domination, and contestability) across 11 dilemmas.
Alexandra‐Codruța Bîzoi   +1 more
wiley   +1 more source

P4s Are Either Unhelpful or Unnecessary. Proposing a Better AI‐Powered Solution to Predict Patients' Preferences

open access: yes. Bioethics, EarlyView.
ABSTRACT The Personalized Patient Preference Predictor (P4) has been proposed as an AI tool to aid surrogate decision‐making when incapacitated patients lack advance directives. Unlike population‐level Patient Preference Predictors (PPPs), which infer preferences from demographic correlations, P4s fine‐tune large language models (LLMs) on a patient's ...
Beatrice Marchegiani
wiley   +1 more source

Enhancing creative writing with robot–LLM integration: The interplay of embodiment, AI creativity and user engagement

open access: yes. British Journal of Educational Technology, EarlyView.
Abstract This study explores the impact of robot–LLM (Large Language Model) integration on collaborative creative writing, focusing on how embodiment and AI creativity influence various aspects of creative output. A total of 150 undergraduate students participated in a structured experimental design with five collaboration conditions: Human–Human (HH),
Yuqing Liu, Yao Song
wiley   +1 more source

Stylistic language drives perceived moral superiority of LLMs. [PDF]

open access: yes. Sci Rep.
Warren K   +4 more
europepmc   +1 more source

Evaluating Creative Output With Generative Artificial Intelligence: Comparing GPT Models and Human Experts in Idea Evaluation

open access: yes. Creativity and Innovation Management, Volume 34, Issue 4, Page 991-1012, December 2025.
ABSTRACT Traditional techniques for evaluating creative outcomes are typically based on evaluations made by human experts. These methods suffer from challenges such as subjectivity, biases, limited availability, ‘crowding’, and high transaction costs. We propose that large language models (LLMs) can be used to overcome these shortcomings.
Theresa Kranzle, Katelyn Sharratt
wiley   +1 more source
