Results 31 to 40 of about 6,156,320 (380)

MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models [PDF]

open access: yes; arXiv.org, 2023
A Multimodal Large Language Model (MLLM) relies on a powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect ...
Chaoyou Fu   +12 more
semanticscholar   +1 more source

Knowledge Engineering Using Large Language Models [PDF]

open access: yes; Transactions on Graph Data and Knowledge, 2023
Knowledge engineering is a discipline that focuses on the creation and maintenance of processes that generate and apply knowledge. Traditionally, knowledge engineering approaches have focused on knowledge expressed in formal languages.
Allen, Bradley P.   +2 more
doaj   +1 more source

mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality [PDF]

open access: yes; arXiv.org, 2023
Large language models (LLMs) have demonstrated impressive zero-shot abilities on a variety of open-ended tasks, while recent research has also explored the use of LLMs for multi-modal generation.
Qinghao Ye   +16 more
semanticscholar   +1 more source

Jailbreaking Black Box Large Language Models in Twenty Queries [PDF]

open access: yes; arXiv.org, 2023
There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of ...
Patrick Chao   +5 more
semanticscholar   +1 more source

ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs [PDF]

open access: yes; International Conference on Learning Representations, 2023
Despite the advancements of open-source large language models (LLMs), e.g., LLaMA, they remain significantly limited in tool-use capabilities, i.e., using external tools (APIs) to fulfill human instructions. (A toy tool-call loop is sketched after this entry.)
Yujia Qin   +17 more
semanticscholar   +1 more source
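
The ToolLLM entry above concerns LLMs that invoke external APIs to carry out instructions. The sketch below is a minimal, hypothetical tool-use loop: `call_llm` stands in for any chat-completion API, the registered tool is a stub, and the JSON tool-call format is an assumption for illustration, not ToolLLM's actual protocol.

```python
import json

def call_llm(messages):
    # Placeholder: a real system would query an LLM here and get back a tool call.
    return '{"tool": "get_weather", "arguments": {"city": "Paris"}}'

def get_weather(city):
    # Stub tool returning canned data instead of hitting a real API.
    return {"city": city, "forecast": "sunny", "temp_c": 21}

TOOLS = {"get_weather": get_weather}

def run_tool_call(user_request):
    """Ask the model which tool to call, execute it, and return the result."""
    messages = [
        {"role": "system", "content": "Answer by emitting a JSON tool call."},
        {"role": "user", "content": user_request},
    ]
    reply = call_llm(messages)      # model proposes a tool call
    call = json.loads(reply)        # e.g. {"tool": ..., "arguments": {...}}
    return TOOLS[call["tool"]](**call["arguments"])

print(run_tool_call("What is the weather in Paris?"))
```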

CPLLM: Clinical prediction with large language models. [PDF]

open access: yes; PLOS Digital Health
We present Clinical Prediction with Large Language Models (CPLLM), a method that involves fine-tuning a pre-trained Large Language Model (LLM) for predicting clinical disease and readmission. We utilized quantization and fine-tuned the LLM using prompts. (A hedged fine-tuning sketch follows this entry.)
Ofir Ben Shoham, Nadav Rappoport
doaj   +2 more sources
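
The CPLLM snippet above mentions quantization plus prompt-based fine-tuning of a pretrained LLM for clinical prediction. Below is a minimal sketch of that general recipe using Hugging Face transformers and peft (8-bit loading plus LoRA); the base model, LoRA settings, and prompt format are assumptions for illustration and are not taken from the CPLLM paper.

```python
# A minimal sketch, assuming Hugging Face transformers and peft are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"   # assumed base model, not necessarily CPLLM's
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # quantized weights
    device_map="auto",
)

# Parameter-efficient fine-tuning: only small LoRA adapter weights are trained.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Hypothetical prompt: a diagnosis history in, a yes/no readmission label out.
prompt = ("Patient diagnoses: I10, E11.9, N18.3.\n"
          "Will this patient be readmitted within 30 days? Answer: yes")
batch = tokenizer(prompt, return_tensors="pt").to(model.device)
loss = model(**batch, labels=batch["input_ids"]).loss  # one example's training loss;
# a Trainer (or a manual loop) would backpropagate this over the labeled cohort.
print(float(loss))
```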

SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models [PDF]

open access: yes; International Conference on Machine Learning, 2022
Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. (The smoothing idea behind the paper's approach is sketched below.)
Guangxuan Xiao   +4 more
semanticscholar   +1 more source
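
The SmoothQuant snippet above notes the tension between accuracy and hardware efficiency in post-training quantization. The sketch below illustrates the paper's core smoothing idea: migrate activation outliers into the weights with a per-channel scale before int8 quantization. The alpha value and toy tensors are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8)) * np.array([1, 1, 50, 1, 1, 1, 1, 1])  # activations with an outlier channel
W = rng.normal(size=(8, 16))                                        # weights

# Per-channel smoothing factors: shift quantization difficulty from X to W.
alpha = 0.5  # illustrative value
s = np.abs(X).max(axis=0) ** alpha / np.abs(W).max(axis=1) ** (1 - alpha)
X_smooth, W_smooth = X / s, W * s[:, None]   # Y = (X diag(s)^-1)(diag(s) W) is unchanged

def quantize_int8(t):
    """Symmetric per-tensor int8 quantization, returning the dequant scale too."""
    scale = np.abs(t).max() / 127.0
    return np.round(t / scale).astype(np.int8), scale

Xq, sx = quantize_int8(X_smooth)
Wq, sw = quantize_int8(W_smooth)
Y_int8 = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)  # dequantized product

print(np.abs(Y_int8 - X @ W).max())  # quantization error vs. the fp reference
```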

Kosmos-2: Grounding Multimodal Large Language Models to the World [PDF]

open access: yes; International Conference on Learning Representations, 2023
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, ... (the link format is sketched after this entry).
Zhiliang Peng   +6 more
semanticscholar   +1 more source
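
The Kosmos-2 snippet mentions representing refer expressions as Markdown-style links that ground text to the visual world. The sketch below only illustrates that surface format (a text span linked to a bounding box); the paper's implementation discretizes boxes into special location tokens, and the normalized-coordinate scheme here is an assumption.

```python
def ground_span(text_span, box):
    """Format a refer expression as a Markdown-style link to its bounding box.

    `box` is (x1, y1, x2, y2) in normalized image coordinates — an assumed scheme;
    Kosmos-2 itself encodes boxes as discrete location tokens.
    """
    x1, y1, x2, y2 = box
    return f"[{text_span}]({x1:.2f},{y1:.2f},{x2:.2f},{y2:.2f})"

caption = (ground_span("a snowman", (0.12, 0.20, 0.55, 0.90))
           + " warming up by "
           + ground_span("a fire", (0.58, 0.55, 0.92, 0.95)))
print(caption)
# [a snowman](0.12,0.20,0.55,0.90) warming up by [a fire](0.58,0.55,0.92,0.95)
```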

Teaching Large Language Models to Self-Debug [PDF]

open access: yes; International Conference on Learning Representations, 2023
Large language models (LLMs) have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging, thus some prior works have designed program repair approaches to ... (a minimal generate-execute-refine loop is sketched after this entry).
Xinyun Chen   +3 more
semanticscholar   +1 more source
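
The Self-Debug snippet describes moving beyond one-shot code generation toward iterative repair. Below is a generic sketch of a generate-execute-refine loop; `call_llm` is a hypothetical stand-in for any completion API, and the feedback format is an assumption rather than the paper's exact prompting scheme.

```python
import traceback

def call_llm(prompt):
    """Hypothetical stand-in for an LLM completion call."""
    return "def add(a, b):\n    return a + b"

def self_debug(task, tests, max_rounds=3):
    """Generate code, run the tests, and feed failures back to the model."""
    prompt = f"Write a Python function for this task:\n{task}"
    for _ in range(max_rounds):
        code = call_llm(prompt)
        try:
            namespace = {}
            exec(code, namespace)      # define the candidate function
            exec(tests, namespace)     # run the unit tests against it
            return code                # all tests passed
        except Exception:
            feedback = traceback.format_exc()
            prompt = (f"{prompt}\n\nYour previous attempt:\n{code}\n"
                      f"It failed with:\n{feedback}\nPlease fix the code.")
    return None  # no passing solution within the budget

print(self_debug("add two numbers", "assert add(2, 3) == 5"))
```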

Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models

open access: yes; medRxiv, 2022
We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Exam (USMLE), which consists of three exams: Step 1, Step 2CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams ...
Tiffany H. Kung   +10 more
semanticscholar   +1 more source
