Abstract:
Apparatuses, systems, and techniques are presented to generate images with one or more visual effects applied. In at least one embodiment, one or more visual effects are applied to one or more images having a resolution that is less than a first resolution, and those visual effects are approximated for one or more images having a resolution that is greater than or equal to the first resolution.
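The core idea above can be illustrated with a minimal sketch: apply an effect (here a box blur, chosen only as an example) at a lower resolution, then upsample the result to approximate the effect at the higher resolution. All function names and the choice of blur, block-average downsampling, and nearest-neighbour upsampling are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def box_blur(img, k=3):
    """Illustrative visual effect: k x k box blur with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def downsample(img, s=2):
    """Average s x s blocks to get a resolution below the 'first resolution'."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s=2):
    """Nearest-neighbour upsample back to the higher resolution."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def approximate_effect(img_hi, s=2):
    """Apply the effect at low resolution, then approximate it at high resolution."""
    lo = downsample(img_hi, s)
    effect_lo = box_blur(lo)
    return upsample(effect_lo, s)
```

Running the blur at quarter the pixel count and upsampling trades exactness for cost, which is the trade-off the abstract describes.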
Abstract:
Apparatuses, systems, and techniques are presented to generate images. In at least one embodiment, one or more neural networks are used to generate one or more images using one or more pixel weights.
Abstract:
Apparatuses, systems, and techniques are presented to generate images. In at least one embodiment, one or more neural networks are used to generate one or more images using one or more pixel weights determined based, at least in part, on one or more sub-pixel offset values.
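One plausible reading of pixel weights derived from sub-pixel offsets is a filter kernel evaluated at each pixel's jitter offset. The sketch below uses a separable tent filter and a simple current/history blend; the tent filter and all names are assumptions for illustration, not the patented method.

```python
import numpy as np

def pixel_weights(offsets):
    """Tent-filter weight per pixel from (dx, dy) sub-pixel offsets in [-1, 1].
    offsets: array of shape (H, W, 2)."""
    w = np.clip(1.0 - np.abs(offsets), 0.0, None)  # per-axis tent response
    return w[..., 0] * w[..., 1]                   # separable 2-D weight

def blend(current, history, offsets):
    """Blend the offset-sampled current frame into history using the weights."""
    w = pixel_weights(offsets)
    return w * current + (1.0 - w) * history
```

A zero offset gives full weight to the current sample; an offset of a whole pixel gives it no weight, so the history value passes through unchanged.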
Abstract:
A processor and a system are provided for performing texturing operations. The processor includes a texture return buffer having a plurality of slots for storing texture values and one or more texture units coupled to the texture return buffer. Each of the slots of the texture return buffer is addressable by a thread. Each texture unit is configured to allocate a slot of the texture return buffer when the texture unit generates a texture value.
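The allocation behaviour can be modelled in software as a fixed pool of slots: a producer (standing in for a texture unit) allocates a slot when it generates a value, and a consumer thread reads the slot back by its address. This is a toy model under assumed semantics, not the hardware design.

```python
class TextureReturnBuffer:
    """Toy software model of a texture return buffer with a fixed slot pool."""

    def __init__(self, num_slots):
        self.slots = [None] * num_slots          # storage for texture values
        self.free = list(range(num_slots))       # free-slot addresses

    def allocate(self, texture_value):
        """Allocate a slot on value generation; return its address."""
        if not self.free:
            raise RuntimeError("no free slot; a consumer must drain the buffer")
        slot = self.free.pop()
        self.slots[slot] = texture_value
        return slot

    def read(self, slot):
        """A thread reads a slot by address, releasing it for reuse."""
        value = self.slots[slot]
        self.slots[slot] = None
        self.free.append(slot)
        return value
```

Reading a slot frees it, so a bounded pool of slots can service an unbounded stream of texture results as long as consumers keep pace.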
Abstract:
A system, method, and computer program product are provided for performing a string search. In use, a first string and a second string are identified. Additionally, a string search is performed, utilizing the first string and the second string.
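As a minimal stand-in for the claimed search, the sketch below performs a naive scan of the first string for the second; the abstract does not specify the search algorithm, so the naive method here is purely illustrative.

```python
def string_search(first, second):
    """Return the index of the first occurrence of `second` in `first`,
    or -1 if it is absent (naive character-by-character scan)."""
    n, m = len(first), len(second)
    for i in range(n - m + 1):
        if first[i:i + m] == second:
            return i
    return -1
```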
Abstract:
A neural network architecture is disclosed for performing video frame prediction using a sequence of video frames and corresponding pairwise optical flows. The neural network processes the sequence of video frames and optical flows utilizing three-dimensional convolution operations, where time (or multiple video frames in the sequence of video frames) provides the third dimension in addition to the two-dimensional pixel space of the video frames. The neural network generates a set of parameters used to predict a next video frame in the sequence of video frames by sampling a previous video frame utilizing spatially-displaced convolution operations. In one embodiment, the set of parameters includes a displacement vector and at least one convolution kernel per pixel. Generating a pixel value in the next video frame includes applying the convolution kernel to a corresponding patch of pixels in the previous video frame based on the displacement vector.
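The sampling step described above can be sketched directly: for each output pixel, round its displacement vector, centre a patch of the previous frame there, and apply that pixel's kernel. The integer rounding of displacements and all names are simplifying assumptions; the described architecture would produce `disp` and `kernels` with a neural network rather than take them as inputs.

```python
import numpy as np

def sdc_predict(prev, disp, kernels):
    """Simplified spatially-displaced convolution.
    prev:    (H, W) previous video frame
    disp:    (H, W, 2) per-pixel displacement (dy, dx)
    kernels: (H, W, k, k) per-pixel convolution kernels
    Each output pixel applies its kernel to a k x k patch of `prev`
    centred at the location given by its displacement vector."""
    H, W = prev.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(prev, r, mode="edge")
    out = np.empty_like(prev, dtype=np.float64)
    for y in range(H):
        for x in range(W):
            dy, dx = np.rint(disp[y, x]).astype(int)
            cy = np.clip(y + dy, 0, H - 1)
            cx = np.clip(x + dx, 0, W - 1)
            patch = padded[cy:cy + k, cx:cx + k]  # window centred at (cy, cx)
            out[y, x] = np.sum(patch * kernels[y, x])
    return out
```

With zero displacements and a delta kernel (1 at the centre), the prediction reduces to copying the previous frame, which is a useful sanity check on the indexing.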
Abstract:
Apparatuses, systems, and techniques are presented to reconstruct one or more images. In at least one embodiment, one or more circuits are to use one or more neural networks to adjust one or more pixel blending weights.
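A minimal sketch of adjusting pixel blending weights, with the neural network stood in for by a single per-pixel sigmoid over input features (for example colour or depth differences). The linear-plus-sigmoid model and every name here are assumptions for illustration only.

```python
import numpy as np

def adjusted_weights(features, w_lin, b):
    """Stand-in for the neural network: per-pixel sigmoid over features
    yields blending weights in (0, 1).
    features: (H, W, F), w_lin: (F,), b: scalar."""
    z = features @ w_lin + b
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(current, history, features, w_lin, b):
    """Reconstruct an image by blending current and history per pixel."""
    a = adjusted_weights(features, w_lin, b)
    return a * current + (1.0 - a) * history
```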
Abstract:
Apparatuses, systems, and techniques are presented to reconstruct one or more images. In at least one embodiment, one or more objects in an image are caused to be generated based, at least in part, on applying one or more offsets to a motion of the one or more objects relative to one or more prior images.
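One simple reading of applying an offset to an object's motion relative to prior images is linear extrapolation plus a correction term. The sketch below assumes positions from two prior images and a constant-velocity model; both assumptions and all names are illustrative, not the claimed method.

```python
def extrapolate_position(pos_prev2, pos_prev1, offset=(0.0, 0.0)):
    """Predict an object's position in the generated image by extrapolating
    its motion across two prior images, then applying an offset.
    Positions are (y, x) tuples."""
    vy = pos_prev1[0] - pos_prev2[0]   # per-frame motion, y component
    vx = pos_prev1[1] - pos_prev2[1]   # per-frame motion, x component
    return (pos_prev1[0] + vy + offset[0],
            pos_prev1[1] + vx + offset[1])
```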