Results 161 to 170 of about 6,774,423 (345)
Large Language Model‐Based Chatbots in Higher Education
The use of large language models (LLMs) in higher education can facilitate personalized learning experiences, advance asynchronous learning, and support instructors, students, and researchers across diverse fields. The development of regulations and guidelines that address ethical and legal issues is essential to ensure safe and responsible adaptation ...
Defne Yigci +4 more
wiley +1 more source
Answering table queries on the web using column keywords [PDF]
Rakesh Pimplikar, Sunita Sarawagi
openalex +1 more source
Introduction: Friendship and Emotions [PDF]
[No abstract][No keywords]
Mary Holmes, Silvana Greco
core
Review of Memristors for In‐Memory Computing and Spiking Neural Networks
Memristors uniquely enable energy‐efficient, brain‐inspired computing by acting as both memory and synaptic elements. This review highlights their physical mechanisms, integration in crossbar arrays, and role in spiking neural networks. Key challenges, including variability, relaxation, and stochastic switching, are discussed, alongside emerging ...
Mostafa Shooshtari +2 more
wiley +1 more source
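The review entry above hinges on memristors acting as both memory and synaptic weight inside a crossbar array. As a minimal sketch of that idea (not code from the review), the snippet below computes the analog matrix-vector product a crossbar performs via Ohm's law and Kirchhoff's current law, and adds the device variability the abstract names as a key challenge; the array sizes, conductance range, and noise level are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a memristor crossbar computes a matrix-vector product
# "in memory". Each device's conductance G[i, j] encodes a weight; applying
# read voltages V[j] on the columns yields summed row currents
# I[i] = sum_j G[i, j] * V[j] (Ohm's law + Kirchhoff's current law).

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 8
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))  # conductances in siemens (assumed range)
V = rng.uniform(0.0, 0.2, size=n_cols)              # read voltages in volts (assumed range)

# Ideal crossbar output: one multiply-accumulate per device, in a single step.
I_ideal = G @ V

# Device-to-device variability (a challenge the review discusses) modeled
# here as multiplicative noise on the programmed conductances.
G_noisy = G * rng.normal(1.0, 0.05, size=G.shape)
I_noisy = G_noisy @ V

print("ideal currents:", I_ideal)
print("noisy currents:", I_noisy)
```

Comparing `I_ideal` and `I_noisy` shows why variability matters: the same programmed weights yield drifting outputs, which is what spiking-neural-network mappings onto crossbars must tolerate or correct.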
Current Issues and Future Trends in Sociology: Extending the Debate in Sociological Research Online [PDF]
[No abstract][No keywords]
Gayle Letherby
core
Illustration of text data mining of rare earth mineral thermodynamic parameters with the large language model‐powered LMExt. A dataset is built with mined thermodynamic properties. Subsequently, a machine learning model is trained to predict formation enthalpy from the dataset.
Juejing Liu +6 more
wiley +1 more source
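The (untitled) entry above describes a two-stage pipeline: mine thermodynamic properties with the LLM-powered LMExt, then train a machine-learning model to predict formation enthalpy from the mined dataset. LMExt's interface is not shown here, so the sketch below starts from an already-mined table; the column names, values, and the choice of a random-forest regressor are hypothetical stand-ins, not the authors' schema or model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical table of mined thermodynamic properties; the columns and
# numbers are illustrative only, not LMExt's actual output.
df = pd.DataFrame({
    "entropy_298K":       [73.2, 117.4, 96.1, 150.8, 82.5, 128.3],
    "heat_capacity_298K": [99.1, 131.0, 110.5, 145.2, 104.8, 139.6],
    "n_rare_earth_atoms": [1, 2, 1, 2, 1, 2],
    "formation_enthalpy": [-1793.7, -1831.1, -1810.2, -1905.4, -1788.9, -1862.0],
})

X = df.drop(columns="formation_enthalpy")
y = df["formation_enthalpy"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

# Stage two of the pipeline: fit a regressor on the mined features.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("MAE (kJ/mol):", mean_absolute_error(y_test, model.predict(X_test)))
```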
Get the gist of the story: Neural map of topic keywords in multi-speaker environment [PDF]
Hyojin Park, Joachim Groß
openalex +1 more source
A Phillips curve interpretation of error-correction models of the wage and price dynamics. [PDF]
[No abstract][No keywords]
Søren Harck
core
Quantization‐aware training creates resource‐efficient structured state space sequence S4(D) models for ultra‐long sequence processing in edge AI hardware. Including quantization during training leads to efficiency gains compared to pure post‐training quantization.
Sebastian Siegel +5 more
wiley +1 more source
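The entry above contrasts quantization-aware training (QAT) with pure post-training quantization. The sketch below shows the generic QAT mechanism under stated assumptions: weights are fake-quantized in the forward pass while a straight-through estimator lets gradients flow in the backward pass. The 4-bit width, toy linear layer, and regression task are assumptions for illustration, not the authors' S4(D) training setup.

```python
import torch
import torch.nn as nn

def fake_quant(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    # Forward pass uses the quantized weights; backward pass sees identity,
    # so the full-precision weights still receive gradients.
    return w + (w_q - w).detach()

class QATLinear(nn.Module):
    """Linear layer whose weights are fake-quantized during training."""
    def __init__(self, d_in: int, d_out: int, bits: int = 4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.bits = bits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ fake_quant(self.weight, self.bits).t()

# Toy regression showing the layer trains despite quantized forward passes.
torch.manual_seed(0)
layer = QATLinear(8, 1, bits=4)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
x = torch.randn(64, 8)
y = x.sum(dim=1, keepdim=True)
for step in range(200):
    loss = ((layer(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```

Because the network learns while seeing its own quantization error, the trained weights end up robust to the low-bit representation, which is the efficiency advantage over quantizing only after training.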