Results 241 to 250 of about 11,357,928 (275)
Some of the following articles may not be open access.

Magicoder: Source Code Is All You Need

arXiv.org, 2023
We introduce Magicoder, a series of fully open-source (code, weights, and data) Large Language Models (LLMs) for code that significantly closes the gap with top code models while having no more than 7B parameters.
Yuxiang Wei et al.
Source: semanticscholar (+1 more)

What Do They Capture? - A Structural Analysis of Pre-Trained Language Models for Source Code

International Conference on Software Engineering, 2022
Recently, many pre-trained language models for source code have been proposed to model the context of code and serve as a basis for downstream code intelligence tasks such as code completion, code search, and code summarization.
Yao Wan et al.
Source: semanticscholar (+1 more)

SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations

International Conference on Software Engineering, 2022
Recent years have seen the successful application of large pretrained models to code representation learning, resulting in substantial improvements on many code-related downstream tasks.
Changan Niu et al.
Source: semanticscholar (+1 more)

Retrieval-based Neural Source Code Summarization

International Conference on Software Engineering, 2020
Source code summarization aims to automatically generate concise summaries of source code in natural language texts, in order to help developers better understand and maintain source code.
Jian Zhang et al.
Source: semanticscholar (+1 more)

A Novel Neural Source Code Representation Based on Abstract Syntax Tree

International Conference on Software Engineering, 2019
Exploiting machine learning techniques for analyzing programs has attracted much attention. One key problem is how to represent code fragments well for follow-up analysis.
Jian Zhang et al.
Source: semanticscholar (+1 more)
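The entry above concerns representing code fragments via their abstract syntax trees. As a minimal sketch of the kind of structural input such models consume (using Python's standard `ast` module; this is only an illustration, not the paper's neural representation):

```python
import ast

def ast_node_types(source: str) -> list[str]:
    """Parse a code fragment and return the node-type sequence
    from a breadth-first walk of its abstract syntax tree."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

# A small code fragment and its structural (node-type) view.
fragment = "def add(a, b):\n    return a + b\n"
print(ast_node_types(fragment))
```

Sequences or subtrees like these, rather than raw token strings, are the typical starting point for AST-based code representation models.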
