Weight demodulation for a generative neural network

    Publication No.: US11605001B2

    Publication Date: 2023-03-14

    Application No.: US17160585

    Filing Date: 2021-01-28

    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
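
    The abstract above describes the overall style-based generator; the weight demodulation named in the title is not spelled out here. Purely as an illustration (not the claimed method), the sketch below shows one common way per-style weight modulation followed by demodulation can be implemented for a convolution layer. The function name, tensor shapes, and the grouped-convolution trick are assumptions made for the example.

# Illustrative sketch: per-sample weight modulation/demodulation for a conv layer.
import torch
import torch.nn.functional as F


def modulated_conv2d(x, weight, style, demodulate=True, eps=1e-8):
    """x: [N, Cin, H, W], weight: [Cout, Cin, kh, kw], style: [N, Cin]."""
    n, cin, h, w = x.shape
    cout = weight.shape[0]
    # Modulate: scale each input-channel slice of the weights by the style.
    w_mod = weight.unsqueeze(0) * style.view(n, 1, cin, 1, 1)  # [N, Cout, Cin, kh, kw]
    if demodulate:
        # Demodulate: rescale each output filter to unit L2 norm, removing the
        # style's effect on output magnitudes.
        denom = torch.rsqrt((w_mod ** 2).sum(dim=(2, 3, 4), keepdim=True) + eps)
        w_mod = w_mod * denom
    # Grouped-convolution trick: fold the batch into the channel dimension so
    # every sample is convolved with its own modulated weights.
    x = x.reshape(1, n * cin, h, w)
    w_mod = w_mod.reshape(n * cout, cin, *weight.shape[2:])
    out = F.conv2d(x, w_mod, padding=weight.shape[-1] // 2, groups=n)
    return out.reshape(n, cout, h, w)


# Example: 4 latent-conditioned styles modulating a 3x3 convolution.
x = torch.randn(4, 8, 16, 16)
weight = torch.randn(16, 8, 3, 3)
style = torch.randn(4, 8).exp()          # positive per-channel scales
print(modulated_conv2d(x, weight, style).shape)   # torch.Size([4, 16, 16, 16])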

    GENERATIVE NEURAL NETWORKS WITH REDUCED ALIASING

    Publication No.: US20220405880A1

    Publication Date: 2022-12-22

    Application No.: US17562494

    Filing Date: 2021-12-27

    Abstract: Systems and methods are disclosed that improve the output quality of any neural network, particularly an image generative neural network. In the real world, details of different scale tend to transform hierarchically. For example, moving a person's head causes the nose to move, which in turn moves the skin pores on the nose. Conventional generative neural networks do not synthesize images in a natural hierarchical manner: the coarse features seem to mainly control the presence of finer features, but not their precise positions. Instead, much of the fine detail appears to be fixed to pixel coordinates, which is a manifestation of aliasing. Aliasing breaks the illusion of a solid and coherent object moving in space. A generative neural network with reduced aliasing provides an architecture that exhibits a more natural transformation hierarchy, where the exact sub-pixel position of each feature is inherited from underlying coarse features.
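
    The abstract motivates reducing aliasing but does not detail the architecture. As an illustrative sketch only, the snippet below shows a generic anti-aliasing pattern for pointwise nonlinearities: temporarily oversample the signal, apply the nonlinearity where the new high frequencies can still be represented, then low-pass filter and downsample back. The filter choices and function name are assumptions, not the claimed design.

# Illustrative sketch: applying a pointwise nonlinearity in an oversampled domain.
import torch
import torch.nn.functional as F


def filtered_leaky_relu(x, up=2, negative_slope=0.2):
    """x: [N, C, H, W]. Apply leaky ReLU at up-times the sampling rate."""
    # 1) Upsample (bilinear here, as a stand-in for an ideal low-pass kernel).
    x = F.interpolate(x, scale_factor=up, mode="bilinear", align_corners=False)
    # 2) Apply the pointwise nonlinearity in the oversampled domain.
    x = F.leaky_relu(x, negative_slope)
    # 3) Low-pass filter and downsample back to the original resolution.
    return F.avg_pool2d(x, kernel_size=up)


x = torch.randn(1, 4, 32, 32)
print(filtered_leaky_relu(x).shape)   # torch.Size([1, 4, 32, 32])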

    THREE-DIMENSIONAL TOMOGRAPHY RECONSTRUCTION PIPELINE

    Publication No.: US20220189100A1

    Publication Date: 2022-06-16

    Application No.: US17365574

    Filing Date: 2021-07-01

    Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., a human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed-function backprojection unit, and a second neural network model. Given information about the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
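
    A rough sketch of the three-stage structure described above (first network, fixed-function backprojection, second network) is given below. The module names, tensor shapes, and the toy axis-aligned backprojection are illustrative assumptions; a real pipeline would use the actual capture geometry.

# Illustrative sketch: 2D projection network -> fixed-function backprojection -> 3D network.
import torch
import torch.nn as nn


class Projection2DNet(nn.Module):
    """First stage: per-projection 2D filtering (hypothetical)."""
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, projections):            # [V, 1, H, W] for V views
        return self.net(projections)


def backproject(projections, depth):
    """Fixed-function stub: smear each (assumed axis-aligned) projection back
    along the depth axis and average over views."""
    v, _, h, w = projections.shape
    volume = projections.unsqueeze(2).expand(v, 1, depth, h, w)
    return volume.mean(dim=0, keepdim=True)    # [1, 1, D, H, W]


class Volume3DNet(nn.Module):
    """Second stage: 3D refinement of the backprojected density volume."""
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, volume):                 # [1, 1, D, H, W]
        return self.net(volume)


projections = torch.rand(4, 1, 32, 32)          # four simulated x-ray views
stage1, stage2 = Projection2DNet(), Volume3DNet()
density = stage2(backproject(stage1(projections), depth=32))
print(density.shape)                            # torch.Size([1, 1, 32, 32, 32])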

    END-TO-END TRAINING FOR A THREE-DIMENSIONAL TOMOGRAPHY RECONSTRUCTION PIPELINE

    Publication No.: US20220189011A1

    Publication Date: 2022-06-16

    Application No.: US17365645

    Filing Date: 2021-07-01

    Abstract: A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., a human body) between a beam source and an imaging sensor. The beam effectively integrates along a path through the object, producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed-function backprojection unit, and a second neural network model. Given information about the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
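
    Building on the same pipeline structure, the sketch below illustrates what end-to-end training could look like: a single loss on the reconstructed volume backpropagated through the second network, a differentiable fixed-function backprojection, and the first network in one optimization step. The loss, optimizer, shapes, and toy backprojection are assumptions for illustration only.

# Illustrative sketch: jointly optimizing both networks through the backprojection step.
import torch
import torch.nn as nn

proj_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))            # first stage
vol_net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv3d(8, 1, 3, padding=1))             # second stage


def backproject(p, depth=32):
    # Differentiable fixed-function stub: smear axis-aligned views over depth.
    return p.unsqueeze(2).expand(-1, -1, depth, -1, -1).mean(dim=0, keepdim=True)


params = list(proj_net.parameters()) + list(vol_net.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.MSELoss()

for step in range(10):                            # toy training loop
    projections = torch.rand(4, 1, 32, 32)        # simulated x-ray views
    target_volume = torch.rand(1, 1, 32, 32, 32)  # reference 3D density
    predicted = vol_net(backproject(proj_net(projections)))
    loss = criterion(predicted, target_volume)    # single end-to-end objective
    optimizer.zero_grad()
    loss.backward()                               # gradients reach both networks
    optimizer.step()
print(f"final loss: {loss.item():.4f}")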

    SMOOTHING REGULARIZATION FOR A GENERATIVE NEURAL NETWORK

    Publication No.: US20210150357A1

    Publication Date: 2021-05-20

    Application No.: US17160648

    Filing Date: 2021-01-28

    Abstract: A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc.
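
    This abstract repeats the general style-based-generator description and does not detail the smoothing regularizer itself. As an illustration of the kind of smoothness penalty commonly paired with style-based generators, the sketch below implements a path-length-style regularizer that penalizes how strongly the generator's output responds to latent perturbations; the function name and the toy generator are assumptions, not the claimed regularizer.

# Illustrative sketch: a smoothness (path-length-style) penalty on a generator mapping.
import torch


def path_length_penalty(generator, latents, target_scale):
    latents = latents.detach().requires_grad_(True)
    images = generator(latents)                           # [N, C, H, W]
    # Project the output onto a random direction, normalized by image size.
    noise = torch.randn_like(images) / (images.shape[2] * images.shape[3]) ** 0.5
    grads, = torch.autograd.grad(outputs=(images * noise).sum(),
                                 inputs=latents, create_graph=True)
    lengths = grads.pow(2).sum(dim=1).sqrt()              # per-sample Jacobian norm
    return ((lengths - target_scale) ** 2).mean()         # pull toward a target scale


# Example with a toy "generator" mapping a 16-dim latent to a 3x8x8 image.
toy_gen = torch.nn.Sequential(torch.nn.Linear(16, 3 * 8 * 8),
                              torch.nn.Unflatten(1, (3, 8, 8)))
z = torch.randn(4, 16)
penalty = path_length_penalty(toy_gen, z, target_scale=1.0)
penalty.backward()                                        # the penalty is differentiable
print(float(penalty))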
