-
Publication No.: WO2023069186A1
Publication Date: 2023-04-27
Application No.: PCT/US2022/041133
Filing Date: 2022-08-23
Inventors: CLEMENT, Colin Bruce; DENG, Shao Kun; SUNDARESAN, Neelakantan; SVYATKOVSKIY, Alexey; TUFANO, Michele
Abstract: A test-driven development system utilizes a neural transformer model with attention to generate method bodies for a focal method given its associated test cases, and optionally a method signature and a docstring of the focal method. The candidate method bodies are validated for syntactic correctness, tested using the given test cases, and tested with a donor class in a target system. Candidate method bodies that pass the validation and testing are then ranked based on a PLUM score that analyzes the candidate method bodies against various quality and performance metrics.
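The validate-test-rank pipeline in this abstract can be sketched in a few lines. This is a minimal illustration only: the `add` method name, the test predicate, and the shortest-first ranking metric are invented stand-ins (the patent's PLUM score covers richer quality and performance metrics, and its testing includes a donor class in a target system):

```python
import ast

def select_method_bodies(candidates, passes_tests):
    """Validate candidate method bodies for syntax, run them against
    the given test cases, and rank the survivors."""
    survivors = []
    for src in candidates:
        # 1. Syntactic validation: discard candidates that do not parse.
        try:
            ast.parse(src)
        except SyntaxError:
            continue
        # 2. Testing: execute the candidate and run the given tests.
        namespace = {}
        try:
            exec(src, namespace)
            if passes_tests(namespace["add"]):
                survivors.append(src)
        except Exception:
            continue
    # 3. Ranking: shortest-first as a placeholder quality metric.
    return sorted(survivors, key=len)

candidates = [
    "def add(a, b): return a - b",       # parses, but fails the tests
    "def add(a, b) return a + b",        # syntax error: discarded
    "def add(a, b):\n    return a + b",  # parses and passes: kept
]
ranked = select_method_bodies(
    candidates, lambda f: f(2, 3) == 5 and f(-1, 1) == 0
)
```

Only the third candidate survives both the syntax check and the test cases.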
-
Publication No.: WO2022265737A1
Publication Date: 2022-12-22
Application No.: PCT/US2022/028660
Filing Date: 2022-05-11
Inventors: CLEMENT, Colin Bruce; DENG, Shao Kun; DRAIN, Dawn; SUNDARESAN, Neelakantan; SVYATKOVSKIY, Alexey; TIAN, Yiding; TUFANO, Michele; WANG, Paul An-Chieh; WU, Chen; YOU, Dongjiang
Abstract: A cloud platform includes several web services that facilitate the automated tuning and deployment of pre-trained deep learning models configured for software engineering tasks. The automated tuning and deployment allow a developer to fine-tune a pre-existing model without having access to the parameters of either the pre-existing or the fine-tuned model, and without requiring user management input. The cloud platform provides a set of files for each pre-trained model that is used to automatically build a fine-tuning infrastructure to fine-tune a model and a deployment infrastructure that deploys the fine-tuned model, without requiring user input.
-
Publication No.: WO2022225689A1
Publication Date: 2022-10-27
Application No.: PCT/US2022/023204
Filing Date: 2022-04-03
Inventors: WILSON-THOMAS, Mark Alistair; SIMMONS, Jonathan Keith; PUGH, David Ellis; LIM, Vivian Julia; LI, Anqi; SRINATH, Shwetha; OBANDO CHACON, German David; JANG, Jin Woo; FU, Shengyu; DENG, Shao Kun
IPC Classes: G06F40/166, G06F40/274, G06F3/048, G06F8/33, G06F8/35
Abstract: Edit automation enhancements may be implemented in source code editors and other text editors. Provisional selections that indicate user intentions are submitted to a suggestion generator with other edit context information, to improve the quality of generated text suggestions and reduce the cognitive load on users. A provisional selection may include a highlighted completion list entry, or document text targeted by a hovering cursor, or metainformation text targeted by the hovering cursor, for example. An inline grey text suggestion driven by provisional selection may be displayed simultaneously with completion list suggestions that were created without regard to provisional selection. Suggestions driven by provisional selection may be interleaved with existing document text. Suggestions may be accepted fully in one gesture, or in parts. Suggestions may be edited by a user before being accepted, driving further suggestion refinement. Multiple suggestions may be displayed simultaneously, reducing pressure on the suggestion generator.
-
Publication No.: WO2021230995A1
Publication Date: 2021-11-18
Application No.: PCT/US2021/025839
Filing Date: 2021-04-06
Inventors: DENG, Shao Kun; JIN, Matthew Glenn; LAHIRI, Shuvendu K.; LIU, Xiaoyu; SHI, Xin; SUNDARESAN, Neelakantan
Abstract: Language interoperability between source code programs not compatible with an interprocedural static code analyzer is achieved through language-independent representations of the programs. The source code programs are transformed into respective intermediate language instructions, from which a language-independent control flow graph and a language-independent type environment are created. A program compatible with the interprocedural static code analyzer is generated from the language-independent control flow graph and the language-independent type environment in order to utilize the interprocedural static code analyzer to detect memory safety faults.
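As a minimal illustration of the kind of language-independent control flow graph this abstract describes, the sketch below computes successor sets from a flat list of intermediate-language instructions. The instruction tuple format and opcode names (`jmp`, `br`, `ret`) are assumptions invented for the example, not the patent's actual intermediate representation:

```python
def build_cfg(instructions):
    """Build a control flow graph from a flat list of intermediate
    instructions. Each instruction is (op, arg); 'jmp'/'br' arguments
    are target indices. Returns {index: set of successor indices}."""
    cfg = {}
    for i, (op, arg) in enumerate(instructions):
        succs = set()
        if op == "jmp":                  # unconditional jump
            succs.add(arg)
        elif op == "br":                 # conditional: both paths
            succs.add(arg)
            if i + 1 < len(instructions):
                succs.add(i + 1)
        elif op != "ret" and i + 1 < len(instructions):
            succs.add(i + 1)             # ordinary fall-through
        cfg[i] = succs
    return cfg

il = [("load", "x"), ("br", 3), ("jmp", 4), ("store", "y"), ("ret", None)]
cfg = build_cfg(il)
```

Because the graph is keyed by instruction index rather than by any source-language construct, the same analysis can run over programs translated from different languages.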
-
Publication No.: WO2022245467A1
Publication Date: 2022-11-24
Application No.: PCT/US2022/026082
Filing Date: 2022-04-25
Inventors: ALLAMANIS, Miltiadis; GUO, Daya; DENG, Shao Kun; SUNDARESAN, Neelakantan; SVYATKOVSKIY, Alexey
Abstract: A code completion tool uses a neural transformer model with attention to generate syntactically correct candidates with holes to complete a partially-formed code snippet. The model is trained to predict the expansion of non-terminal symbols of the production rules of the underlying grammar of the code snippet without being constrained to a left-to-right expansion order. A hole is a non-terminal symbol of the grammar of a programming language that marks a position in a candidate where the code completion engine is not certain of the production rule that should be used to expand the non-terminal symbol. The hole allows the code completion engine to expand other non-terminal symbols in a candidate and allows the user to guide the expansion of the holes in a candidate.
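A toy rendering of hole-based expansion: each non-terminal is expanded top-down, and a placeholder hole is emitted whenever more than one production rule could apply. The grammar and the `<SYMBOL>` hole notation are invented for illustration; the patent's engine chooses expansions with a neural transformer rather than this deterministic rule count:

```python
# Toy grammar: each non-terminal maps to its candidate productions.
GRAMMAR = {
    "STMT": [["if", "EXPR", ":", "BODY"]],
    "EXPR": [["x", "<", "NUM"], ["x", ">", "NUM"]],  # ambiguous
    "BODY": [["return", "NUM"]],
    "NUM": [["0"], ["1"]],                            # ambiguous
}

def expand(symbol):
    """Expand a non-terminal top-down; emit a hole marker whenever
    more than one production could apply, i.e., the engine is not
    certain which rule to use."""
    productions = GRAMMAR.get(symbol)
    if productions is None:          # terminal symbol: emit as-is
        return [symbol]
    if len(productions) > 1:         # uncertain: leave a hole
        return ["<" + symbol + ">"]
    out = []
    for s in productions[0]:
        out.extend(expand(s))
    return out

tokens = expand("STMT")
# → ['if', '<EXPR>', ':', 'return', '<NUM>']
```

The certain parts of the candidate (`if`, `:`, `return`) are expanded fully while the ambiguous positions remain as holes for the user, or a later pass, to resolve.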
-
Publication No.: WO2021231007A1
Publication Date: 2021-11-18
Application No.: PCT/US2021/026738
Filing Date: 2021-04-10
Abstract: An automated program repair tool utilizes a neural transformer model with attention to predict the contents of a bug repair in the context of source code having a bug of an identified bug type. The neural transformer model is trained on a large unsupervised corpus of source code using a span-masking denoising optimization objective, and fine-tuned on a large supervised dataset of triplets, each containing a bug-type annotation, a software bug, and its repair. The bug-type annotation is derived from an interprocedural static code analyzer. A bug type edit centroid is computed for each bug type and used in the inference decoding phase to generate the bug repair.
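The span-masking corruption at the heart of a denoising objective like the one above can be sketched in a few lines; the sentinel token name and the fixed span length are illustrative choices, not the patent's actual configuration:

```python
import random

def span_mask(tokens, span_len=3, mask_token="<MASK>", seed=0):
    """Replace one contiguous span of tokens with a single sentinel.
    Returns the corrupted input and the target span the model must
    reconstruct: the span-masking denoising objective in miniature."""
    rng = random.Random(seed)
    start = rng.randrange(0, len(tokens) - span_len + 1)
    target = tokens[start:start + span_len]
    corrupted = tokens[:start] + [mask_token] + tokens[start + span_len:]
    return corrupted, target

tokens = "def add ( a , b ) : return a + b".split()
corrupted, target = span_mask(tokens)
```

Training pairs each corrupted sequence with its masked-out span, so the model learns to reconstruct missing code from its surrounding context.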
-
Publication No.: WO2021021322A2
Publication Date: 2021-02-04
Application No.: PCT/US2020/037102
Filing Date: 2020-06-11
IPC Classes: G06F8/33, G06N5/02, G06N7/00, G06N20/00, G06F16/9027, G06F17/16, G06F17/18, G06N3/088, G06N5/04, G06N7/005
Abstract: A code completion tool uses a neural transformer model to generate candidate sequences to complete a line of source code. The neural transformer model is trained using a conditional language modeling objective on a large unsupervised dataset that includes source code programs written in several different programming languages. The neural transformer model is used within a beam search that predicts the most likely candidate sequences for a code snippet under development.
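A minimal sketch of the beam search this abstract mentions, over a hypothetical table of next-token probabilities standing in for the neural transformer's predictions:

```python
import math

# Hypothetical next-token distributions keyed by prefix; a stand-in
# for the neural transformer model's per-step predictions.
MODEL = {
    (): {"return": 0.6, "print": 0.4},
    ("return",): {"a": 0.7, "b": 0.3},
    ("print",): {"a": 0.9},
    ("return", "a"): {"+": 1.0},
    ("return", "a", "+"): {"b": 1.0},
}

def beam_search(step_probs, beam_width=2):
    """Keep the beam_width highest-scoring prefixes at each step,
    scoring by summed log-probability; return the best sequence."""
    beams = [((), 0.0)]
    while True:
        expanded = []
        for prefix, score in beams:
            options = step_probs.get(prefix)
            if not options:  # no continuation: sequence is finished
                expanded.append((prefix, score))
                continue
            for token, p in options.items():
                expanded.append((prefix + (token,), score + math.log(p)))
        expanded.sort(key=lambda beam: beam[1], reverse=True)
        if expanded == beams:  # nothing changed: all beams finished
            break
        beams = expanded[:beam_width]
    return list(beams[0][0])

best = beam_search(MODEL)
```

With this table the search completes the line to `return a + b`, preferring the globally most probable sequence over a greedy token-by-token choice.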