SHARPNESS-AWARE MINIMIZATION FOR ROBUSTNESS IN SPARSE NEURAL NETWORKS

    Publication No.: US20240127067A1

    Publication Date: 2024-04-18

    Application No.: US18459083

    Filing Date: 2023-08-31

    IPC Classification: G06N3/082

    CPC Classification: G06N3/082

    Abstract: Systems and methods are disclosed for improving the natural robustness of sparse neural networks. Pruning a dense neural network may improve inference speed and reduce the memory footprint and energy consumption of the resulting sparse neural network while maintaining a desired level of accuracy. In real-world scenarios in which sparse neural networks deployed in autonomous vehicles perform tasks such as object detection and classification on acquired inputs (images), the neural networks need to be robust to new environments, weather conditions, camera effects, etc. Applying sharpness-aware minimization (SAM) optimization during training of the sparse neural network improves performance on out-of-distribution (OOD) images compared with conventional stochastic gradient descent (SGD) optimization. SAM optimizes a neural network toward a flat minimum: a point that not only has a small loss value but also lies within a region of low loss.
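
    As an illustration of the SAM update described above, the following is a minimal PyTorch-style sketch of a single sharpness-aware step, not the patented training procedure; the function name sam_step and the neighborhood radius rho are assumptions made for this example.

        import torch

        def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
            # First pass: gradients of the loss at the current weights.
            loss_fn(model(x), y).backward()

            # Perturb the weights along the gradient direction so the second
            # pass sees the (approximately) worst-case loss within radius rho.
            perturbations = {}
            with torch.no_grad():
                grad_norm = torch.norm(torch.stack(
                    [p.grad.norm() for p in model.parameters() if p.grad is not None]))
                for p in model.parameters():
                    if p.grad is None:
                        continue
                    e = rho * p.grad / (grad_norm + 1e-12)
                    p.add_(e)
                    perturbations[p] = e

            # Second pass: gradients evaluated at the perturbed weights.
            model.zero_grad()
            loss_fn(model(x), y).backward()

            # Restore the original weights, then step with the
            # sharpness-aware gradients from the second pass.
            with torch.no_grad():
                for p, e in perturbations.items():
                    p.sub_(e)
            base_optimizer.step()
            model.zero_grad()

    Running such a step on a sparse (pruned) model with a standard SGD base optimizer corresponds to the setting the abstract contrasts with plain SGD training.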

    AUGMENTING LEGACY NEURAL NETWORKS FOR FLEXIBLE INFERENCE

    Publication No.: US20230325670A1

    Publication Date: 2023-10-12

    Application No.: US17820780

    Filing Date: 2022-08-18

    IPC Classification: G06N3/08

    CPC Classification: G06N3/082

    Abstract: A technique for dynamically configuring and executing an augmented neural network in real time according to performance constraints also maintains the legacy neural network execution path. A neural network model that has been trained for a task is augmented with low-compute "shallow" phases, each paired with a legacy phase, and the legacy phases of the neural network model are held constant (e.g., unchanged) while the shallow phases are trained. During inference, one or more of the shallow phases can be selectively executed in place of the corresponding legacy phases. Compared with the legacy phases, the shallow phases are typically less accurate but have reduced latency and consume less power. Therefore, processing with one or more of the shallow phases in place of the corresponding legacy phases enables the augmented neural network to dynamically adapt to changes in the execution environment (e.g., processing load or performance requirements).
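
    The selective execution can be pictured with a short PyTorch-style sketch; the names AugmentedNetwork, legacy_phases, shallow_phases, and use_shallow are illustrative assumptions, not identifiers from the disclosure.

        import torch
        import torch.nn as nn

        class AugmentedNetwork(nn.Module):
            """Pairs each frozen legacy phase with a low-compute shallow phase."""

            def __init__(self, legacy_phases, shallow_phases):
                super().__init__()
                self.legacy_phases = nn.ModuleList(legacy_phases)
                self.shallow_phases = nn.ModuleList(shallow_phases)
                # Hold the legacy path constant; only the shallow phases train.
                for p in self.legacy_phases.parameters():
                    p.requires_grad = False

            def forward(self, x, use_shallow=None):
                # use_shallow[i] == True routes phase i through its shallow
                # variant, trading some accuracy for lower latency and power.
                if use_shallow is None:
                    use_shallow = [False] * len(self.legacy_phases)
                for legacy, shallow, fast in zip(self.legacy_phases,
                                                 self.shallow_phases, use_shallow):
                    x = shallow(x) if fast else legacy(x)
                return x

    A runtime scheduler could set the use_shallow flags per inference call based on the current processing load or latency budget, which is the kind of dynamic adaptation the abstract describes.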

    DYNAMIC NEURAL NETWORK MODEL SPARSIFICATION

    Publication No.: US20240119291A1

    Publication Date: 2024-04-11

    Application No.: US18203552

    Filing Date: 2023-05-30

    IPC Classification: G06N3/082 G06N3/0495

    CPC Classification: G06N3/082 G06N3/0495

    Abstract: Machine learning is a process that learns a neural network model from a given dataset, where the model can then be used to make predictions about new data. To reduce the size, computation, and latency of a neural network model, a compression technique such as model sparsification can be employed. To avoid the negative consequences of pruning a fully pretrained neural network model on the one hand, and of training a sparse model from the start without any recovery option on the other, the present disclosure provides a dynamic neural network model sparsification process that allows previously pruned parts to be recovered, improving the quality of the sparse neural network model.
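
    One common pattern that captures the recovery idea at a high level is periodic magnitude-based mask recomputation: because the weights keep receiving gradient updates between mask refreshes, a weight pruned in one round can regrow and re-enter the active set in a later round. The PyTorch-style sketch below illustrates only that general pattern; the function names recompute_masks and apply_masks are assumptions, and this is not the disclosed sparsification process.

        import torch

        def recompute_masks(layers, sparsity=0.5):
            # Re-derive binary masks from the current weight magnitudes so that
            # parameters pruned earlier can be recovered if they have regrown.
            masks = {}
            for layer in layers:
                w = layer.weight.detach()
                k = int(sparsity * w.numel())
                if k == 0:
                    masks[layer] = torch.ones_like(w)
                    continue
                threshold = w.abs().flatten().kthvalue(k).values
                masks[layer] = (w.abs() > threshold).float()
            return masks

        def apply_masks(masks):
            # Zero the pruned weights for subsequent forward passes; gradient
            # updates between mask refreshes are what allow a pruned weight
            # to move away from zero and be recovered at the next refresh.
            with torch.no_grad():
                for layer, mask in masks.items():
                    layer.weight.mul_(mask)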