Results 301 to 310 of about 11,579,038
Some of the following articles may not be open access.

SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations

International Conference on Software Engineering, 2022
Recent years have seen the successful application of large pretrained models to code representation learning, resulting in substantial improvements on many code-related downstream tasks.
Changan Niu   +5 more

What Do They Capture? - A Structural Analysis of Pre-Trained Language Models for Source Code

International Conference on Software Engineering, 2022
Recently, many pre-trained language models for source code have been proposed to model the context of code and serve as a basis for downstream code intelligence tasks such as code completion, code search, and code summarization.
Yao Wan   +5 more

Retrieval-based Neural Source Code Summarization

International Conference on Software Engineering, 2020
Source code summarization aims to automatically generate concise summaries of source code in natural language texts, in order to help developers better understand and maintain source code.
Jian Zhang   +4 more

A Novel Neural Source Code Representation Based on Abstract Syntax Tree

International Conference on Software Engineering, 2019
Exploiting machine learning techniques for analyzing programs has attracted much attention. One key problem is how to represent code fragments well for follow-up analysis.
Jian Zhang   +5 more

DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

arXiv.org
We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of …
DeepSeek-AI   +39 more

Magicoder: Source Code Is All You Need

arXiv.org, 2023
Yuxiang Wei   +4 more

DeepSeek-Coder: When the Large Language Model Meets Programming - The Rise of Code Intelligence

arXiv.org
The rapid development of large language models has revolutionized code intelligence in software development. However, the predominance of closed-source models has restricted extensive research and development.
Daya Guo   +12 more

Source-code Similarity Detection and Detection Tools Used in Academia

ACM Transactions on Computing Education, 2019
Teachers deal with plagiarism on a regular basis, so they try to prevent and detect plagiarism, a task that is complicated by the large size of some classes. Students who cheat often try to hide their plagiarism (obfuscate), and many different similarity …
Matija Novak, M. Joy, D. Kermek

Retrieval on source code: a neural code search

MAPL@PLDI, 2018
Searching over large code corpora can be a powerful productivity tool for both beginner and experienced developers because it helps them quickly find examples of code related to their intent.
Saksham Sachdev   +5 more

Nanofluidics for osmotic energy conversion

Nature Reviews Materials, 2021
Zhen Zhang, Liping Wen, Lei Jiang
