Results 131 to 140 of about 63,129
GPT classifications, with application to credit lending
Generative Pre-trained Transformers (GPT) and Large Language Models (LLMs) have made significant advances in natural language processing in recent years.
Golnoosh Babaei, Paolo Giudici
doaj +1 more source
Embracing large language model (LLM) technologies in hydrology research
The growing complexity of hydrological systems necessitates innovative approaches to data management, knowledge management, and model development. Large Language Models (LLMs) have great potential to revolutionize hydrological research by unifying and advancing these three critical aspects.
Zewei Ma +8 more
openaire +1 more source
Large‐scale Hopfield neural networks (HNNs) for associative computing are implemented using vertical NAND (VNAND) flash memory. The proposed VNAND HNN with the asynchronous update scenario achieves robust image restoration performance despite fabrication variations, while significantly reducing chip area (≈117× smaller than resistive random‐access ...
Jin Ho Chang +4 more
wiley +1 more source
FTGRN introduces an LLM‐enhanced framework for gene regulatory network inference through a two‐stage workflow. It combines a Transformer‐based model, pretrained on GPT‐4 derived gene embeddings and regulatory knowledge, with a fine‐tuning stage utilizing single‐cell RNA‐seq data.
Guangzheng Weng +7 more
wiley +1 more source
Can Large Language Models abstract Medical Coded Language?
Large Language Models (LLMs) have become a pivotal research area, potentially making beneficial contributions in fields like healthcare where they can streamline automated billing and decision support.
Lee, Simon A., Lindsey, Timothy
core
Large Language Models (LLMs) for Electronic Design Automation (EDA)
Accepted by IEEE International System-on-Chip ...
Xu, Kangwei +12 more
openaire +2 more sources
Illustration of text data mining of rare earth mineral thermodynamic parameters with the large language model‐powered LMExt. A dataset is built with mined thermodynamic properties. Subsequently, a machine learning model is trained to predict formation enthalpy from the dataset.
Juejing Liu +6 more
wiley +1 more source
Robust Dysarthric Speech Recognition with GAN Enhancement and LLM Correction
This study tackles dysarthric speech recognition by combining generative adversarial network (GAN)‐generated synthetic data with large language model (LLM)‐based error correction. The approach integrates three key elements: an improved CycleGAN to generate synthetic dysarthric speech for data augmentation, a multimodal automatic speech recognition core ...
Yibo He +3 more
wiley +1 more source
Quantization‐aware training creates resource‐efficient structured state‐space sequence S4(D) models for ultra‐long sequence processing in edge AI hardware. Including quantization during training leads to efficiency gains compared to pure post‐training quantization.
Sebastian Siegel +5 more
wiley +1 more source
This paper presents an integrated AI‐driven cardiovascular platform unifying multimodal data, predictive analytics, and real‐time monitoring. It demonstrates how artificial intelligence—from deep learning to federated learning—enables early diagnosis, precision treatment, and personalized rehabilitation across the full disease lifecycle, promoting a ...
Mowei Kong +4 more
wiley +1 more source