Abstract:
Described herein are techniques related to noise reduction for image sequences or videos. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope and meaning of the claims. A noise reduction tool includes a motion estimator configured to estimate motion in the video, a noise spectrum estimator configured to estimate noise in the video, a shot detector configured to trigger the noise estimation process, a noise spectrum validator configured to validate the estimated noise spectrum, and a noise reducer configured to reduce noise in the video using the estimated noise spectrum.
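The component pipeline described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all component names and call signatures here are assumptions.

```python
class NoiseReductionTool:
    """Sketch of the described pipeline: a shot detector triggers noise
    estimation, a validator gates the estimate, and a reducer applies it.
    Component signatures are illustrative assumptions."""

    def __init__(self, motion_est, noise_est, shot_detector, validator, reducer):
        self.motion_est = motion_est        # estimates motion in the video
        self.noise_est = noise_est          # estimates a noise spectrum
        self.shot_detector = shot_detector  # triggers (re-)estimation
        self.validator = validator          # accepts or rejects a spectrum
        self.reducer = reducer              # denoises using the spectrum

    def process(self, frames):
        spectrum = None
        out = []
        for i, frame in enumerate(frames):
            # A detected shot boundary triggers a fresh noise estimate.
            if self.shot_detector(frames, i):
                motion = self.motion_est(frames, i)
                candidate = self.noise_est(frame, motion)
                if self.validator(candidate):
                    spectrum = candidate
            # Frames seen before any validated estimate pass through unchanged.
            out.append(self.reducer(frame, spectrum) if spectrum is not None else frame)
        return out
```

With stub components (e.g. a reducer that subtracts a scalar "spectrum"), the control flow can be exercised end to end without any real video data.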
Abstract:
Systems and methods for displaying a simplified version of a modification of a media content item on a mobile device are provided. The mobile device can receive, via a user interface presented on the mobile device, a request for a desired modification of an original media content item. The mobile device can perform a simplified version of the desired modification of the original media content item. The mobile device can present a preview of the modified media content item in the user interface. The mobile device can transmit, to another computing device, the original media content item with the request for the desired modification.
Abstract:
Implementations disclose mutual noise estimation for videos. A method includes determining an optimal frame noise variance for intensity values of each frame of a video, the optimal frame noise variance based on a determined relationship between spatial variance and temporal variance of the intensity values of homogeneous blocks in the frame; identifying an optimal video noise variance for the video based on the optimal frame noise variances of the frames of the video; selecting, for each frame of the video, one or more of the blocks having a spatial variance that is less than the optimal video noise variance, the one or more blocks selected as the homogeneous blocks; and utilizing the selected homogeneous blocks to estimate a noise signal of the video.
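The block-selection step above can be sketched as follows: blocks whose spatial variance falls below the video noise variance are treated as homogeneous, and their variances are pooled into a noise estimate. Block size and the pooling by simple averaging are assumptions for illustration.

```python
import numpy as np

def homogeneous_blocks(frame, noise_var, block=8):
    """Select blocks whose spatial variance is below the video noise
    variance; such blocks are treated as homogeneous, so their residual
    variation is attributed to noise rather than image content."""
    h, w = frame.shape
    blocks = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = frame[y:y + block, x:x + block]
            if patch.var() < noise_var:
                blocks.append(patch)
    return blocks

def estimate_noise_variance(frames, noise_var, block=8):
    """Pool homogeneous blocks across frames and average their spatial
    variances as a simple stand-in for the noise-signal estimate."""
    variances = [p.var()
                 for f in frames
                 for p in homogeneous_blocks(f, noise_var, block)]
    return float(np.mean(variances)) if variances else 0.0
```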
Abstract:
Systems and methods are described for identifying video content as spherical video or non-spherical video in response to determining that frame scores and video scores satisfy a threshold level. For example, a plurality of image frames can be extracted from video content, classified in a dual stage process, and scored according to particular classification and scoring mechanisms.
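The dual-stage thresholding can be sketched as below: per-frame scores are thresholded first, then the fraction of passing frames is compared against a video-level threshold. The specific aggregation (a passing-frame fraction) is an assumption for illustration.

```python
def classify_video(frame_scores, frame_threshold, video_threshold):
    """Two-stage decision: frames whose classifier score meets the
    frame threshold count as 'spherical-looking'; the video is labeled
    spherical if enough of its frames pass."""
    passing = [s for s in frame_scores if s >= frame_threshold]
    video_score = len(passing) / len(frame_scores)
    return "spherical" if video_score >= video_threshold else "non-spherical"
```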
Abstract:
A method for determining the position of multiple cameras relative to each other includes, at a processor, receiving video data from at least one video recording taken by each camera; selecting a subset of frames of each video recording, including determining relative blurriness of each frame of each video recording, selecting frames having a lowest relative blurriness, counting feature points in each of the lowest relative blurriness frames, and selecting for further analysis lowest relative blurriness frames having a highest count of feature points; and processing each selected subset of frames from each video recording to estimate the location and orientation of each camera.
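The frame-selection stage can be sketched as follows, using the variance of a discrete Laplacian as a common blurriness proxy (higher variance means sharper). The sharpness measure and the two-stage top-k selection are assumptions for illustration; feature counts are taken as given, since the abstract does not name a detector.

```python
import numpy as np

def sharpness(frame):
    """Variance of a 5-point discrete Laplacian over interior pixels;
    a standard proxy in which blurrier frames score lower."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def select_frames(frames, feature_counts, k=5, m=2):
    """Keep the k sharpest (least blurry) frames, then among those keep
    the m frames with the highest feature-point counts."""
    sharpest = sorted(range(len(frames)),
                      key=lambda i: sharpness(frames[i]),
                      reverse=True)[:k]
    return sorted(sharpest,
                  key=lambda i: feature_counts[i],
                  reverse=True)[:m]
```

The returned indices identify the subset of frames passed on to pose estimation.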
Abstract:
A system for video stabilization is provided. The system includes a media component, a transformation component, an offset component and a zoom component. The media component receives a video sequence including at least a first video frame and a second video frame. The transformation component calculates at least a first motion parameter associated with translational motion for the first video frame and at least a second motion parameter associated with the translational motion for the second video frame. The offset component subtracts an offset value generated as a function of a maximum motion parameter and a minimum motion parameter from the first motion parameter and the second motion parameter to generate a set of modified motion parameters. The zoom component determines a zoom value for the video sequence based at least in part on the set of modified motion parameters.
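The offset and zoom computations can be sketched as below for a single translational axis: the offset is the midpoint of the extreme motion parameters, and the zoom must be large enough to hide the largest residual shift. Treating translations as fractions of a frame dimension normalized to 1.0 is an assumption for illustration.

```python
def stabilize(translations):
    """Center per-frame translational motion parameters by subtracting
    an offset derived from the maximum and minimum parameters (their
    midpoint), then derive a zoom factor that crops away the largest
    remaining shift. Translations are fractions of the frame size."""
    offset = (max(translations) + min(translations)) / 2.0
    modified = [t - offset for t in translations]
    peak = max(abs(t) for t in modified)
    # Zooming by 1 / (1 - 2 * peak) leaves no visible border; shifts of
    # half a frame or more cannot be hidden by zooming.
    zoom = 1.0 / (1.0 - 2.0 * peak) if peak < 0.5 else float("inf")
    return modified, zoom
```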
Abstract:
An interactive multi-view module identifies a plurality of media items associated with a real-world event, each of the plurality of media items comprising a video portion and an audio portion. The interactive multi-view module synchronizes the audio portions of each of the plurality of media items according to a common reference timeline, determines a relative geographic position associated with each of the plurality of media items and presents the plurality of media items in an interactive multi-view player interface based at least on the synchronized audio portions and the relative geographic positions.
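Synchronizing audio portions against a common reference timeline is commonly done by cross-correlation; a minimal sketch under that assumption (the abstract does not specify the alignment method):

```python
import numpy as np

def audio_offset(ref, other):
    """Estimate the sample offset that aligns `other` to `ref` as the
    peak of their full cross-correlation. A positive result means
    `other` lags `ref` by that many samples."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)
```

Applying this to each media item's audio against a chosen reference yields the per-item shifts that place all items on one timeline.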