Results 231 to 240 of about 174,416 (267)
Some of the following articles may not be open access.
Generating Feedback-Ladders for Logical Errors in Programming using Large Language Models
Educational Data Mining
In feedback generation for logical errors in programming assignments, large language model (LLM)-based methods have shown great promise. These methods ask the LLM to generate feedback given the problem statement and a student's (buggy) submission.
Hasnain Heickal, Andrew Lan
semanticscholar +1 more source
StarCoder 2 and The Stack v2: The Next Generation
arXiv.org
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2.
Anton Lozhkov +65 more
semanticscholar +1 more source
MAGE: A Multi-Agent Engine for Automated RTL Code Generation
Design Automation Conference
The automatic generation of RTL code (e.g., Verilog) through natural language instructions has emerged as a promising direction with the advancement of large language models (LLMs).
Yujie Zhao +4 more
semanticscholar +1 more source
CodeRAG-Bench: Can Retrieval Augment Code Generation?
North American Chapter of the Association for Computational Linguistics
While language models (LMs) have proven remarkably adept at generating code, many programs are challenging for LMs to generate using their parametric knowledge alone.
Z. Z. Wang +6 more
semanticscholar +1 more source
TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators
Annual Meeting of the Association for Computational Linguistics
Triton, a high-level Python-like language designed for building efficient GPU kernels, is widely adopted in deep learning frameworks due to its portability, flexibility, and accessibility.
Jianling Li +11 more
semanticscholar +1 more source
Prompting Techniques for Secure Code Generation: A Systematic Investigation
ACM Transactions on Software Engineering and Methodology
Large Language Models (LLMs) are gaining momentum in software development with prompt-driven programming enabling developers to create code from Natural Language (NL) instructions. However, studies have questioned their ability to produce secure code and ...
Catherine Tony +4 more
semanticscholar +1 more source
EvoCodeBench: An Evolving Code Generation Benchmark with Domain-Specific Evaluations
Neural Information Processing Systems
How to evaluate Large Language Models (LLMs) in code generation remains an open question. Existing benchmarks have two limitations: data leakage and lack of domain-specific evaluation.
Jia Li +8 more
semanticscholar +1 more source
Design Automation Conference
Multi-agent frameworks with Large Language Models (LLMs) have become promising tools for generating general-purpose programming languages using test-driven development, allowing developers to create more accurate and robust code.
Charlie Campbell +3 more
semanticscholar +1 more source
TigerCoder: A Novel Suite of LLMs for Code Generation in Bangla
arXiv.org
Despite being the 5th most spoken language, Bangla remains underrepresented in Large Language Models (LLMs), particularly for code generation. This primarily stems from the scarcity of high-quality data to pre-train and/or fine-tune such models. Hence, we ...
Nishat Raihan +2 more
semanticscholar +1 more source
Exploring the Effectiveness of LLMs in Automated Logging Statement Generation: An Empirical Study
IEEE Transactions on Software Engineering
Automated logging statement generation supports developers in documenting critical software runtime behavior. While substantial recent research has focused on retrieval-based and learning-based methods, results suggest they fail to provide appropriate ...
Yichen Li +7 more
semanticscholar +1 more source

