Adapting Large Language Model with Speech for Fully Formatted End-to-End Speech Recognition
Most end-to-end (E2E) speech recognition models are composed of encoder and decoder blocks that perform acoustic and language modeling functions. Pretrained large language models (LLMs) have the potential to improve the performance of E2E ASR.
Gong, Yifan +7 more
core
Machine Learning for Green Solvents: Assessment, Selection and Substitution
Environmental regulations have intensified demand for green solvents, but discovery is limited by Solvent Selection Guides (SSGs) that quantify solvent sustainability. By training a machine learning model on the GlaxoSmithKline SSG, a database of sustainability metrics for 10,189 solvents, GreenSolventDB is developed. Integrated with Hansen solubility metrics, ...
Rohan Datta +4 more
wiley +1 more source
Are Large Language Models Good Fact Checkers: A Preliminary Study
Recently, Large Language Models (LLMs) have drawn significant attention due to their outstanding reasoning capabilities and extensive knowledge repository, positioning them as superior in handling various natural language processing tasks compared to ...
Cao, Han +4 more
core
A Perspective on Interactive Theorem Provers in Physics
In an interactive theorem prover (ITP), one can write mathematical definitions, theorems, and proofs, and the correctness of those results is checked automatically. This perspective goes over the best usage of ITPs within physics and motivates the open‐source, community‐run project PhysLean, the aim of which is to be a library for digitalized physics
Joseph Tooby‐Smith
wiley +1 more source
Spectral Decomposition of Chemical Semantics for Activity Cliffs‐Aware Molecular Property Prediction
PrismNet mimics chemical intuition by functioning as a computational prism, refracting molecular graphs into complementary semantic views and spectral frequencies. This dual‐decomposition strategy effectively captures both global topologies and subtle “activity cliff” perturbations.
Chaoyang Xie +9 more
wiley +1 more source
Evaluating Quantized Llama 2 Models for IoT Privacy Policy Language Generation
Quantized large language models are large language models (LLMs) optimized for model size while preserving their efficacy. They can be executed on consumer-grade computers without the powerful features of dedicated servers needed to execute regular (non ...
Bhavani Malisetty, Alfredo J. Perez
doaj +1 more source
In our opinion, the exuberance surrounding the relative success of data-driven large language models (LLMs) is slightly misguided, for several reasons: (i) LLMs cannot be relied upon for factual information, since for LLMs all ingested text (factual or ...
Saba, Walid S.
core
Multi‐View Biomedical Foundation Models for Molecule‐Target and Property Prediction
Molecular foundation models can provide accurate predictions for a large set of downstream tasks. We develop MMELON, an approach that integrates pre‐trained graph, image, and text foundation models and validate our multi‐view model on over 120 tasks, including GPCR binding.
Parthasarathy Suryanarayanan +17 more
wiley +1 more source
His‐MMDM: Multi‐Domain and Multi‐Omics Translation of Histopathological Images with Diffusion Models
His‐MMDM is a diffusion model‐based framework for scalable multi‐domain and multi‐omics translation of histopathological images, enabling tasks ranging from virtual staining to cross‐tumor knowledge transfer and omics‐guided image editing. Generative AI (GenAI) has advanced computational pathology through various image translation models.
Zhongxiao Li +13 more
wiley +1 more source
Generative AI Chatbots Across Domains: A Systematic Review
The rapid advancement of large language models (LLMs) has significantly transformed the development and deployment of generative AI chatbots across various domains.
Lama Aldhafeeri +6 more
doaj +1 more source

