
Semi‐supervised classification of fundus images combined with CNN and GCN

open access: yes · Journal of Applied Clinical Medical Physics, Volume 23, Issue 12, December 2022
Abstract Purpose Diabetic retinopathy (DR), a fundus lesion with specific changes, is one of the most serious complications of diabetes. Early diagnosis of DR can effectively reduce the visual damage it causes. Because DR lesions vary in type and morphology, automatic classification of fundus images in mass screening can ...
Sixu Duan   +8 more
wiley   +1 more source

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models [PDF]

open access: yes · International Conference on Learning Representations, 2022
Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks that require solving problems harder than the exemplars shown in the prompts.
Denny Zhou   +9 more
semanticscholar   +1 more source
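
The technique named in the title decomposes a hard problem into simpler subquestions and then solves them in sequence, feeding each answer into the context for the next. A minimal sketch of that two-stage structure, assuming a hypothetical llm() completion function in place of any concrete model API:

    def llm(prompt: str) -> str:
        """Placeholder: send `prompt` to a language model, return its completion."""
        raise NotImplementedError

    def least_to_most(question: str) -> str:
        # Stage 1: ask the model to decompose the problem into subquestions,
        # ordered from easiest to hardest, ending with the original question.
        decomposition = llm(
            "Break this problem into simpler subquestions, one per line, "
            f"from easiest to hardest, ending with the original question:\n{question}"
        )
        subquestions = [q.strip() for q in decomposition.splitlines() if q.strip()]

        # Stage 2: solve the subquestions in order; each answer is appended to
        # the context so later subquestions can build on earlier results.
        context = question
        answer = ""
        for sub in subquestions:
            answer = llm(f"{context}\nQ: {sub}\nA:")
            context += f"\nQ: {sub}\nA: {answer}"
        return answer  # the answer to the final (original) question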

Uniqueness of radiomic features in non‐small cell lung cancer

open access: yes · Journal of Applied Clinical Medical Physics, Volume 23, Issue 12, December 2022
Abstract Purpose The uniqueness of radiomic features, combined with their reproducibility, determines the reliability of radiomic studies. This study tests the hypothesis that radiomic features extracted from a defined region of interest (ROI) are unique to the underlying structure (e.g., tumor). Approach Two cohorts of non‐small cell lung cancer ...
Gary Ge, Jie Zhang
wiley   +1 more source

CodeT5+: Open Code Large Language Models for Code Understanding and Generation [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2023
Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence. However, existing code LLMs have two main limitations in terms of architecture and pretraining tasks.
Yue Wang   +5 more
semanticscholar   +1 more source
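
A minimal way to try such a model with Hugging Face Transformers, sketched under the assumption that the publicly released checkpoint id is Salesforce/codet5p-220m (other model sizes use different ids):

    # Span-infilling demo; the checkpoint id is an assumption based on the
    # public CodeT5+ release and is not taken from the abstract above.
    from transformers import AutoTokenizer, T5ForConditionalGeneration

    checkpoint = "Salesforce/codet5p-220m"  # assumed model id
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = T5ForConditionalGeneration.from_pretrained(checkpoint)

    # Ask the model to fill in the masked span of a Python function.
    inputs = tokenizer("def print_hello_world():<extra_id_0>", return_tensors="pt")
    outputs = model.generate(inputs.input_ids, max_length=16)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))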

Large Language Models are not Fair Evaluators [PDF]

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2023
In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models.
Peiyi Wang   +8 more
semanticscholar   +1 more source
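
One bias of this kind is positional: swapping the order in which the two responses are shown to the referee can flip its verdict. A minimal sketch of position-swapped judging that surfaces (and neutralizes) such flips, with judge() as a hypothetical LLM-referee call:

    def judge(question: str, first: str, second: str) -> str:
        """Placeholder: ask an LLM referee which response is better.
        Returns 'A' (meaning `first`), 'B' (meaning `second`), or 'tie'."""
        raise NotImplementedError

    def balanced_verdict(question: str, resp_a: str, resp_b: str) -> str:
        v1 = judge(question, resp_a, resp_b)           # resp_a shown first
        v2 = judge(question, resp_b, resp_a)           # order swapped
        v2 = {"A": "B", "B": "A", "tie": "tie"}[v2]    # map back to original labels
        # Require agreement under both orders; disagreement signals a
        # position-dependent judgment and is treated as a tie.
        return v1 if v1 == v2 else "tie"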

Developmental Scaffolding with Large Language Models

open access: yes · 2023 IEEE International Conference on Development and Learning (ICDL)
Exploration and self-observation are key mechanisms of infant sensorimotor development. These processes are further guided by parental scaffolding, which accelerates skill and knowledge acquisition. In developmental robotics, this approach has often been adopted by having a human act as the source of scaffolding.
M. Batuhan Celik   +3 more
openaire   +4 more sources

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2023
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their ...
Potsawee Manakul, Adian Liusie, M. Gales
semanticscholar   +1 more source
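
The core intuition is that facts a model actually knows tend to recur across independently sampled answers, while hallucinations do not. A minimal sketch of that consistency check, using a crude unigram-overlap scorer as a stand-in (the paper's own variants rely on stronger scorers, e.g. QA- or NLI-based):

    def support(sentence: str, sample: str) -> float:
        """Fraction of the sentence's words that also appear in one sampled answer."""
        words = sentence.lower().split()
        sample_words = set(sample.lower().split())
        return sum(w in sample_words for w in words) / max(len(words), 1)

    def hallucination_scores(main_sentences: list[str], samples: list[str]) -> list[float]:
        # Higher score = less support from the samples = more likely hallucinated.
        return [
            1.0 - sum(support(s, smp) for smp in samples) / len(samples)
            for s in main_sentences
        ]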

Autoformalization with Large Language Models

open access: yes · 2022
Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs. A successful autoformalization system could advance the fields of formal verification, program synthesis, and artificial intelligence.
Y. Wu   +6 more
openaire   +3 more sources
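
A toy instance of the input/output pair such a system targets, shown as a natural-language statement and one possible Lean 4 formalization (a hand-written sketch; an actual system would generate the formal side, and lemma names can vary across Lean versions):

    -- Natural language: "The sum of two even natural numbers is even."
    -- One possible formalization, with evenness unfolded as an existential:
    theorem even_add_even (m n : Nat)
        (hm : ∃ a, m = 2 * a) (hn : ∃ b, n = 2 * b) :
        ∃ c, m + n = 2 * c :=
      match hm, hn with
      | ⟨a, ha⟩, ⟨b, hb⟩ => ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩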

Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [PDF]

open access: yes · International Conference on Learning Representations, 2023
Time series forecasting holds significant importance in many real-world dynamic systems and has been extensively studied. Unlike natural language processing (NLP) and computer vision (CV), where a single large model can tackle multiple tasks, models for ...
Ming Jin   +10 more
semanticscholar   +1 more source

A Simple and Effective Pruning Approach for Large Language Models [PDF]

open access: yes · International Conference on Learning Representations, 2023
As their size increases, Large Language Models (LLMs) are natural candidates for network pruning methods: approaches that drop a subset of network weights while striving to preserve performance.
Mingjie Sun   +3 more
semanticscholar   +1 more source
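
For orientation, the simplest member of that family is one-shot magnitude pruning, sketched below; the paper's own criterion differs (it also weighs each weight by its input activation norm), so this generic version is illustration only:

    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the `sparsity` fraction of entries with the smallest magnitude."""
        k = int(weights.size * sparsity)
        if k == 0:
            return weights.copy()
        # The k-th smallest absolute value becomes the pruning threshold.
        threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
        return weights * (np.abs(weights) > threshold)

    # Example: prune half the entries of a random layer.
    w = np.random.randn(4, 8)
    print((magnitude_prune(w, 0.5) == 0).mean())  # ~0.5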
