Results 221 to 230 of about 174,416
Some of the following articles may not be open access.

MMCode: Benchmarking Multimodal Large Language Models for Code Generation with Visually Rich Programming Problems

Conference on Empirical Methods in Natural Language Processing
Programming often involves converting detailed and complex specifications into code, a process during which developers typically utilize visual aids to more effectively convey concepts.
Kaixin Li et al.
semanticscholar

CodeContests+: High-Quality Test Case Generation for Competitive Programming

Conference on Empirical Methods in Natural Language Processing
Competitive programming, due to its high reasoning difficulty and precise correctness feedback, has become a key task for both training and evaluating the reasoning capabilities of large language models (LLMs).
Zihan Wang et al.
semanticscholar

Breaking the Programming Language Barrier: Multilingual Prompting to Empower Non-Native English Learners

Proceedings of the 27th Australasian Computing Education Conference
Non-native English speakers (NNES) face multiple barriers to learning programming. These barriers can be obvious, such as the fact that programming language syntax and instruction are often in English, or more subtle, such as being afraid to ask for help ...
J. Prather et al.
semanticscholar

SolEval: Benchmarking Large Language Models for Repository-level Solidity Code Generation

arXiv.org
Large language models (LLMs) have transformed code generation. However, most existing approaches focus on mainstream languages such as Python and Java, neglecting the Solidity language, the predominant programming language for Ethereum smart contracts ...
Zhiyuan Peng et al.
semanticscholar

CodeIF: Benchmarking the Instruction-Following Capabilities of Large Language Models for Code Generation

Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
With the rapid advancement of Large Language Models (LLMs), the demand for robust instruction-following capabilities in code generation tasks has grown significantly. Code generation not only facilitates faster prototyping and automated testing, but also ...
Kaiwen Yan et al.
semanticscholar

Exploring Code Language Models for Automated HLS-based Hardware Generation: Benchmark, Infrastructure and Analysis

Asia and South Pacific Design Automation Conference
Recent advances in code generation have illuminated the potential of employing large language models (LLMs) for general-purpose programming languages such as Python and C++, opening new opportunities for automating software development and enhancing ...
Jiahao Gai et al.
semanticscholar

DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models

Conference on Empirical Methods in Natural Language Processing
We introduce DA-Code, a code generation benchmark specifically designed to assess LLMs on agent-based data science tasks. This benchmark features three core elements: First, the tasks within DA-Code are inherently challenging, setting them apart from ...
Yiming Huang et al.
semanticscholar

LLASP: Fine-tuning Large Language Models for Answer Set Programming

International Conference on Principles of Knowledge Representation and Reasoning
Recently, Large Language Models (LLMs) have showcased their potential in various natural language processing tasks, including code generation. However, while significant progress has been made in adapting LLMs to generate code for several imperative ...
Erica Coppolillo et al.
semanticscholar

Prompting and Fine-tuning Large Language Models for Automated Code Review Comment Generation

arXiv.org
Generating accurate code review comments remains a significant challenge due to the inherently diverse and non-unique nature of the task output. Large language models pretrained on both programming and natural language data tend to perform well in code ...
Md. Asif Haider et al.
semanticscholar

Using Large Language Models for Student-Code Guided Test Case Generation in Computer Science Education

arXiv.org
In computer science education, test cases are an integral part of programming assignments since they can be used as assessment items to test students' programming knowledge and provide personalized feedback on student-written code.
Nischal Ashok Kumar, Andrew Lan
semanticscholar
