SYSTEM AND METHOD FOR IDENTIFYING PERFORMANCE BOTTLENECKS

    Publication Number: WO2023086158A1

    Publication Date: 2023-05-19

    Application Number: PCT/US2022/043561

    Application Date: 2022-09-15

    IPC Classification: G06F11/36 G06F11/34

    Abstract: A computer-implemented method includes accessing performance trace data for executed code of multiple services. Symbols corresponding to functions of the executed code are identified. First sequences of functions are detected from the identified symbols, and a first performance threshold is computed for each first sequence. The method then receives an incoming performance trace, detects second sequences of functions from the incoming trace, identifies second sequences equivalent to the first sequences, and compares the performance of each identified second sequence against the threshold of its equivalent first sequence to flag second sequences that constitute a performance bottleneck.
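
    A minimal sketch of that detection loop, assuming a trace is a list of (function name, duration) pairs and that a "sequence of functions" is a fixed-length n-gram; the n-gram length and the percentile used as the performance threshold are illustrative assumptions, not taken from the publication:

        from collections import defaultdict

        NGRAM = 3  # assumed length of a "sequence of functions"

        def sequences(trace):
            """Yield (function n-gram, total duration) pairs from a trace,
            where trace is a list of (function_name, duration_ms) tuples."""
            for i in range(len(trace) - NGRAM + 1):
                window = trace[i:i + NGRAM]
                yield tuple(fn for fn, _ in window), sum(d for _, d in window)

        def build_thresholds(baseline_traces, pct=0.95):
            """Compute a per-sequence duration threshold from baseline traces."""
            samples = defaultdict(list)
            for trace in baseline_traces:
                for seq, dur in sequences(trace):
                    samples[seq].append(dur)
            return {seq: sorted(durs)[int(pct * (len(durs) - 1))]
                    for seq, durs in samples.items()}

        def find_bottlenecks(incoming_trace, thresholds):
            """Flag sequences in an incoming trace that exceed their threshold."""
            return [(seq, dur, thresholds[seq])
                    for seq, dur in sequences(incoming_trace)
                    if seq in thresholds and dur > thresholds[seq]]

    In this reading, build_thresholds runs offline over historical traces of the services, while find_bottlenecks runs once per incoming trace.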

    PERFORMANCE BUG DETECTION AND CODE RECOMMENDATION

    Publication Number: WO2022154872A1

    Publication Date: 2022-07-21

    Application Number: PCT/US2021/061054

    Application Date: 2021-11-30

    IPC Classification: G06F8/33

    Abstract: An automated system detects performance bugs in a program and provides code recommendations to improve its performance. Performance-related pull requests are identified in part by a classifier trained on semi-supervised data. A code recommendation table is generated from these pull requests and is searched for similarly-improved code using a set of difference features: structural and performance features that appear in a pull request's before-code but not in its after-code.
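
    An illustrative sketch of the difference-feature lookup; the concrete features, the table layout, and the overlap score here are assumptions, since the abstract only states that structural and performance features present in the before-code but absent from the after-code drive the search:

        import ast

        def code_features(source):
            """Extract a crude structural feature set from Python source:
            the set of AST node type names it contains."""
            return {type(node).__name__ for node in ast.walk(ast.parse(source))}

        def difference_features(before_code, after_code):
            """Features of the before-code absent from the after-code;
            used offline to key the code recommendation table."""
            return frozenset(code_features(before_code) - code_features(after_code))

        def recommend(snippet, table):
            """Return the after-code whose difference features best overlap
            the features of the query snippet. `table` maps the frozenset
            returned by difference_features to a pull request's after-code."""
            probe = code_features(snippet)
            best, best_score = None, 0.0
            for feats, after_code in table.items():
                score = len(probe & feats) / (len(feats) or 1)
                if score > best_score:
                    best, best_score = after_code, score
            return best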

    DISTILLING TRANSFORMERS FOR NEURAL CROSS-DOMAIN SEARCH

    Publication Number: WO2023003636A1

    Publication Date: 2023-01-26

    Application Number: PCT/US2022/031701

    Application Date: 2022-06-01

    Abstract: A distillation system extracts knowledge from a large pre-trained sequence-to-sequence neural transformer model into a smaller bi-encoder. The pre-trained model is trained on a large corpus to translate data from a first domain into a second domain. A teacher model is generated by fine-tuning the pre-trained model on a smaller translation task with true translation pairs. The fine-tuned teacher then generates augmented data values, which are used together with the true translation pairs to train the bi-encoder. The bi-encoder is used to perform cross-domain searches.
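
    A simplified sketch of that pipeline under stated assumptions: the teacher's generation function, the bag-of-embeddings towers, and the in-batch contrastive loss are placeholders for whatever models and objective the publication actually uses:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinyEncoder(nn.Module):
            """Stand-in bi-encoder tower: mean bag-of-embeddings over token ids."""
            def __init__(self, vocab_size=10000, dim=128):
                super().__init__()
                self.emb = nn.Embedding(vocab_size, dim)

            def forward(self, token_ids):            # (batch, seq_len)
                return self.emb(token_ids).mean(dim=1)

        def augment(teacher_generate, sources, n_samples=4):
            """Use the fine-tuned teacher to produce synthetic translation
            pairs that augment the true translation pairs."""
            return [(src, teacher_generate(src))
                    for src in sources for _ in range(n_samples)]

        def contrastive_step(query_enc, code_enc, q_ids, c_ids, opt, temp=0.05):
            """One in-batch training step: matching (query, code) pairs are
            positives, every other pairing in the batch is a negative."""
            q = F.normalize(query_enc(q_ids), dim=-1)
            c = F.normalize(code_enc(c_ids), dim=-1)
            logits = q @ c.T / temp                  # (batch, batch)
            loss = F.cross_entropy(logits, torch.arange(q.size(0)))
            opt.zero_grad(); loss.backward(); opt.step()
            return loss.item()

    At search time, items from both domains are embedded by their respective towers and ranked by similarity, which is what makes the bi-encoder far cheaper to query than the full sequence-to-sequence teacher.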