Abstract:
A method performed by an electronic device is described. The method includes performing vertical processing of a depth map to determine a vertical non-obstacle estimation. The method also includes performing horizontal processing of the depth map to determine a horizontal non-obstacle estimation. The method further includes combining the vertical non-obstacle estimation and the horizontal non-obstacle estimation. The method additionally includes generating a non-obstacle map based on the combination of the vertical and horizontal non-obstacle estimations.
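A minimal sketch of the vertical-plus-horizontal idea described above, assuming a dense depth map given as a NumPy array. The function names (estimate_vertical, estimate_horizontal, non_obstacle_map), the tolerances, and the logical-AND combination are illustrative assumptions, not the disclosed algorithm.

import numpy as np

DEPTH_STEP_TOL = 0.5   # assumed tolerance (metres) for depth change between rows
ROW_SPREAD_TOL = 0.3   # assumed tolerance (metres) for depth spread along a row

def estimate_vertical(depth):
    """Mark pixels whose depth changes gradually between neighbouring rows,
    a rough column-wise cue for open ground rather than an upright obstacle."""
    d_rows = np.diff(depth, axis=0)              # change from row r to row r+1
    smooth = np.abs(d_rows) < DEPTH_STEP_TOL
    return np.vstack([smooth, smooth[-1:]])      # pad back to full height

def estimate_horizontal(depth, win=9):
    """Mark pixels whose local row-wise depth is nearly constant,
    a rough row-wise cue for a flat, unobstructed surface."""
    pad = win // 2
    padded = np.pad(depth, ((0, 0), (pad, pad)), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, win, axis=1)
    spread = windows.max(axis=-1) - windows.min(axis=-1)
    return spread < ROW_SPREAD_TOL

def non_obstacle_map(depth):
    """Combine the two estimates; a simple logical AND stands in here for the
    combination step named in the abstract."""
    return estimate_vertical(depth) & estimate_horizontal(depth)

if __name__ == "__main__":
    depth = np.linspace(10.0, 2.0, 48)[:, None] * np.ones((48, 64))  # toy ground plane
    depth[10:20, 30:40] = 3.0                                        # toy obstacle patch
    print(non_obstacle_map(depth).mean())   # fraction of pixels marked as non-obstacle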
Abstract:
This disclosure describes methods and apparatus for decoding data. In one aspect, the method comprises decoding encoded video data to obtain decoded video frame data, the encoded video data comprising encoded video frame data encoded at a first frame rate and embedded data. The method further comprises determining a camera parameter from the embedded data and up-converting the decoded video frame data to a second frame rate based on the camera parameter. The determined camera parameter may be, for example, a parameter associated with one or more of a zoom factor, an auto focus status, lens position information, frame luma information, an auto exposure (AE) convergence status, an automatic white balance (AWB) convergence status, global motion information, and frame blurriness information. An encoding device may embed the camera parameter(s) in an encoded video bit stream for a decoder to utilize during frame rate up-conversion.
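A hedged sketch of how a decoder might use embedded camera parameters to steer frame rate up-conversion, assuming the embedded data has already been parsed into a per-frame dictionary. The keys ("blurriness", "ae_converged"), the 0.5 threshold, and the blend-or-repeat policy are illustrative assumptions, not a format defined by the disclosure.

import numpy as np

def up_convert(frames, metadata):
    """Double the frame rate by inserting one frame between each decoded pair.
    When the embedded data flags heavy blur or an unsettled auto exposure,
    fall back to frame repetition instead of blending."""
    out = []
    for i in range(len(frames) - 1):
        cur, nxt = frames[i], frames[i + 1]
        meta = metadata[i]
        out.append(cur)
        if meta.get("blurriness", 0.0) > 0.5 or not meta.get("ae_converged", True):
            out.append(cur.copy())   # repeat the frame: safer during exposure transients
        else:
            out.append(((cur.astype(np.uint16) + nxt) // 2).astype(np.uint8))  # simple blend
    out.append(frames[-1])
    return out

frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 30, 50)]
metadata = [{"blurriness": 0.1, "ae_converged": True},
            {"blurriness": 0.9, "ae_converged": False},
            {}]
print(len(up_convert(frames, metadata)))   # 5 frames after 2x up-conversion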
Abstract:
The disclosure is directed to techniques for encoder-assisted adaptive interpolation of video frames. According to the disclosed techniques, an encoder generates information to assist a decoder in interpolation of a skipped video frame, i.e., an S frame. The information permits the decoder to reduce visual artifacts in the interpolated frame and thereby achieve improved visual quality. The information may include interpolation equation labels that identify selected interpolation equations to be used by the decoder for individual video blocks. As an option, to conserve bandwidth, the equation labels may be transmitted for only selected video blocks that meet a criterion for encoder-assisted interpolation. Other video blocks without equation labels may be interpolated according to a default interpolation technique.
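A minimal sketch of label-driven block interpolation at the decoder side. The label values (0 = average, 1 = copy-previous, 2 = copy-next), the 8x8 block size, and the dictionary of labels keyed by block position are illustrative assumptions; the abstract only states that labels select interpolation equations and that unlabeled blocks use a default.

import numpy as np

BLOCK = 8   # assumed block size

def interpolate_skipped(prev, nxt, labels):
    """labels maps (block_row, block_col) -> equation id; blocks without a
    transmitted label fall back to the default averaging interpolation."""
    out = np.empty_like(prev)
    for by in range(0, prev.shape[0], BLOCK):
        for bx in range(0, prev.shape[1], BLOCK):
            p = prev[by:by+BLOCK, bx:bx+BLOCK]
            n = nxt[by:by+BLOCK, bx:bx+BLOCK]
            eq = labels.get((by // BLOCK, bx // BLOCK), 0)
            if eq == 1:
                blk = p                                                    # copy previous
            elif eq == 2:
                blk = n                                                    # copy next
            else:
                blk = ((p.astype(np.uint16) + n) // 2).astype(np.uint8)    # default average
            out[by:by+BLOCK, bx:bx+BLOCK] = blk
    return out

prev = np.zeros((16, 16), dtype=np.uint8)
nxt = np.full((16, 16), 100, dtype=np.uint8)
print(interpolate_skipped(prev, nxt, {(0, 0): 2})[0, 0],   # labeled block: copy-next -> 100
      interpolate_skipped(prev, nxt, {})[0, 0])            # unlabeled block: average -> 50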
Abstract:
A classifier for detecting objects in images can be configured to receive features of an image from a feature extractor. The classifier can determine a feature window based on the received features and can allow each decision tree of the classifier to access only a predetermined area of the feature window. Each decision tree of the classifier can compare a corresponding predetermined area of the feature window with one or more thresholds. The classifier can determine an object in the image based on the comparisons. In some examples, the classifier can determine objects in a feature window based on received features, where the received features are based on color information for an image.
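A hedged sketch of an ensemble in which each "tree" (reduced here to a single threshold test, a stump) sees only its own predetermined patch of the feature window. The patch coordinates, thresholds, and majority-vote rule are assumptions made for illustration.

import numpy as np

class Stump:
    def __init__(self, rows, cols, threshold):
        self.rows, self.cols, self.threshold = rows, cols, threshold
    def vote(self, window):
        # Access only this stump's predetermined area of the feature window.
        patch = window[self.rows[0]:self.rows[1], self.cols[0]:self.cols[1]]
        return 1 if patch.mean() > self.threshold else 0   # compare patch statistic to threshold

class WindowClassifier:
    def __init__(self, stumps, min_votes):
        self.stumps, self.min_votes = stumps, min_votes
    def is_object(self, window):
        # Declare an object when enough restricted-area tests agree.
        return sum(s.vote(window) for s in self.stumps) >= self.min_votes

stumps = [Stump((0, 4), (0, 4), 0.5), Stump((4, 8), (4, 8), 0.2), Stump((0, 8), (0, 8), 0.3)]
clf = WindowClassifier(stumps, min_votes=2)
window = np.random.default_rng(0).random((8, 8))   # stand-in for extracted features
print(clf.is_object(window))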
Abstract:
A method performed by an electronic device is described. The method includes generating a depth map of a scene external to a vehicle. The method also includes performing first processing in a first direction of the depth map to determine a first non-obstacle estimation of the scene. The method also includes performing second processing in a second direction of the depth map to determine a second non-obstacle estimation of the scene. The method further includes combining the first non-obstacle estimation and the second non-obstacle estimation to determine a non-obstacle map of the scene. The combining includes selectively using a first reliability map of the first processing and/or a second reliability map of the second processing. The method additionally includes navigating the vehicle using the non-obstacle map.
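A minimal sketch of the selective combination step, assuming the two estimates and their reliability maps are same-shaped NumPy arrays. The per-pixel "trust whichever reliability is higher" rule and the example values are illustrative assumptions.

import numpy as np

def combine(est1, est2, rel1, rel2):
    """Selectively combine two boolean non-obstacle estimates using their
    reliability maps (higher value = more trusted at that pixel)."""
    return np.where(rel1 >= rel2, est1, est2)

est1 = np.array([[True, False], [True, True]])
est2 = np.array([[False, False], [False, True]])
rel1 = np.array([[0.9, 0.2], [0.4, 0.8]])
rel2 = np.array([[0.1, 0.7], [0.6, 0.3]])
print(combine(est1, est2, rel1, rel2))
# [[ True False]
#  [False  True]]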
Abstract:
A method performed by an electronic device is described. The method includes generating a plurality of bounding regions based on an image. The method also includes determining a subset of the plurality of bounding regions based on at least one criterion and a selected area in the image. The method further includes processing the image based on the subset of the plurality of bounding regions.
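A hedged sketch of choosing the subset of bounding regions: keep only boxes that contain a selected point and exceed a minimum size. Both criteria, the (x0, y0, x1, y1) box format, and the min_area parameter are illustrative assumptions; the abstract only says "at least one criterion and a selected area".

def select_regions(boxes, point, min_area=100):
    """boxes: list of (x0, y0, x1, y1); point: (x, y) selected in the image."""
    px, py = point
    subset = []
    for (x0, y0, x1, y1) in boxes:
        contains = x0 <= px <= x1 and y0 <= py <= y1          # overlaps the selection
        big_enough = (x1 - x0) * (y1 - y0) >= min_area        # size criterion
        if contains and big_enough:
            subset.append((x0, y0, x1, y1))
    return subset

boxes = [(10, 10, 60, 60), (40, 40, 45, 45), (100, 100, 200, 200)]
print(select_regions(boxes, point=(42, 42)))   # only the first box satisfies both criteria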
Abstract:
A method performed by an electronic device is described. The method includes obtaining a motion vector map based on at least two images. The motion vector map has fewer motion vectors than a number of pixels in each of the at least two images. The method also includes obtaining a feature point from one of the at least two images. The method further includes performing a matching operation between a template associated with the feature point and at least one search space based on the motion vector map. The method additionally includes determining a motion vector corresponding to the feature point based on the matching operation.
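A hedged sketch of refining a coarse motion vector for one feature point: a motion vector map with one vector per 8x8 block of pixels (an assumed granularity) centres a small search window in the second image, and sum-of-absolute-differences template matching picks the best offset within it. The grid size, template size, search radius, and SAD cost are all assumptions for illustration.

import numpy as np

GRID = 8     # assumed: one motion vector per 8x8 block of pixels
T = 3        # template half-size around the feature point
R = 2        # search radius around the coarse prediction

def refine_motion_vector(img0, img1, point, mv_map):
    y, x = point
    dy, dx = mv_map[y // GRID, x // GRID]                      # coarse prediction for this block
    template = img0[y-T:y+T+1, x-T:x+T+1]
    best, best_cost = (int(dy), int(dx)), np.inf
    for oy in range(-R, R + 1):                                # search space around prediction
        for ox in range(-R, R + 1):
            cy, cx = y + dy + oy, x + dx + ox
            cand = img1[cy-T:cy+T+1, cx-T:cx+T+1]
            cost = np.abs(template.astype(int) - cand).sum()   # SAD matching cost
            if cost < best_cost:
                best, best_cost = (int(dy + oy), int(dx + ox)), cost
    return best

rng = np.random.default_rng(1)
img0 = rng.integers(0, 255, (64, 64), dtype=np.uint8)
img1 = np.roll(img0, shift=(5, 3), axis=(0, 1))                # true motion is (5, 3)
mv_map = np.full((8, 8, 2), (4, 2))                            # coarse guess near the truth
print(refine_motion_vector(img0, img1, (30, 30), mv_map))      # refined vector matches (5, 3)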