-
Publication No.: US20240119256A1
Publication Date: 2024-04-11
Application No.: US18486534
Filing Date: 2023-10-13
Applicant: Google LLC
Inventor: Andrew Gerald Howard , Mark Sandler , Liang-Chieh Chen , Andrey Zhmoginov , Menglong Zhu
Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks, where the input and output of the inverted residual block are thin bottleneck layers while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that serve as the input and output of the inverted residual block.
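The inverted residual block described in the abstract (thin bottleneck in, expanded intermediate representation, linear projection back out, with a shortcut between the thin ends) can be illustrated with a minimal NumPy sketch. This is not the patent's implementation; it assumes a 3x3 depthwise kernel, stride 1, ReLU6 activations, and an expansion factor `t`, and all function and variable names here are illustrative.

```python
import numpy as np

def pointwise_conv(x, w):
    # x: (H, W, C_in), w: (C_in, C_out); a 1x1 convolution is a per-pixel matmul
    return x @ w

def depthwise_conv3x3(x, w):
    # x: (H, W, C), w: (3, 3, C); stride 1, zero padding 1, so spatial size is preserved
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 3, j:j + 3, :]              # (3, 3, C) window
            out[i, j] = np.sum(patch * w, axis=(0, 1))   # each channel filtered independently
    return out

def relu6(x):
    return np.clip(x, 0.0, 6.0)

def inverted_residual_block(x, w_expand, w_dw, w_project):
    # thin input -> expanded representation -> thin output (linear bottleneck)
    h = relu6(pointwise_conv(x, w_expand))   # expand: C -> t*C channels
    h = relu6(depthwise_conv3x3(h, w_dw))    # depthwise 3x3 on the expanded channels
    h = pointwise_conv(h, w_project)         # linear projection back to C (no activation)
    return x + h                             # residual shortcut between the thin bottlenecks

rng = np.random.default_rng(0)
C, t = 4, 6                                  # bottleneck width and expansion factor
x = rng.standard_normal((8, 8, C))
w_expand = rng.standard_normal((C, t * C)) * 0.1
w_dw = rng.standard_normal((3, 3, t * C)) * 0.1
w_project = rng.standard_normal((t * C, C)) * 0.1
y = inverted_residual_block(x, w_expand, w_dw, w_project)
```

Note that the projection back to the thin bottleneck is deliberately linear, and the shortcut connects the two thin ends rather than the wide intermediate tensor, which is what makes the block "inverted" relative to a classical residual block.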
-
Publication No.: US20230297852A1
Publication Date: 2023-09-21
Application No.: US18007379
Filing Date: 2021-07-29
Applicant: Google LLC
Inventor: Li Zhang , Andrew Gerald Howard , Brendan Wesley Jou , Yukun Zhu , Mingda Zhang , Andrey Zhmoginov
IPC: G06N5/022
CPC classification number: G06N5/022
Abstract: Example implementations of the present disclosure combine efficient model design and dynamic inference. With a standalone lightweight model, unnecessary computation on easy examples is avoided, and the information extracted by the lightweight model also guides the synthesis of a specialist network from the basis models. Extensive experiments on ImageNet show that a proposed example BasisNet is particularly effective for image classification; a BasisNet-MV3 achieves 80.3% top-1 accuracy with 290M MAdds without early termination.
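The abstract's core idea, a lightweight model whose output guides the synthesis of a specialist network from a set of basis models, can be sketched as an input-dependent mixture of basis weights. The abstract does not specify the combination rule, so the softmax mixing, the shapes, and every name below are assumptions for illustration only.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D coefficient vector
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)

# hypothetical setup: N basis models, each contributing one (C_out, C_in) kernel
N, C_out, C_in = 4, 8, 16
basis = rng.standard_normal((N, C_out, C_in))

# stand-in for the lightweight model's per-input logits over the basis models
lightweight_logits = rng.standard_normal(N)
alpha = softmax(lightweight_logits)          # one mixing coefficient per basis model

# specialist kernel: input-dependent convex combination of the basis kernels
specialist = np.tensordot(alpha, basis, axes=1)   # shape (C_out, C_in)
```

Under this reading, easy examples can terminate at the lightweight model's own prediction, and only harder examples pay for synthesizing and running the specialist.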
-
Publication No.: US20210350206A1
Publication Date: 2021-11-11
Application No.: US17382503
Filing Date: 2021-07-22
Applicant: Google LLC
Inventor: Andrew Gerald Howard , Mark Sandler , Liang-Chieh Chen , Andrey Zhmoginov , Menglong Zhu
Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks, where the input and output of the inverted residual block are thin bottleneck layers while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that serve as the input and output of the inverted residual block.
-