Abstract:
The present invention provides a depth generation method. The method includes: obtaining a left two-dimensional (2D) image and a right 2D image, each having a first image resolution; scaling the left 2D image and the right 2D image to obtain a scaled left 2D image and a scaled right 2D image, each having a second image resolution; and generating an output depth map based on the scaled left 2D image and the scaled right 2D image.
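A minimal sketch of this pipeline, assuming OpenCV semi-global block matching for the depth step; the second resolution, focal length and baseline values are illustrative placeholders rather than figures from the disclosure.

import cv2
import numpy as np

def generate_depth(left_2d, right_2d, second_res=(640, 360),
                   focal_px=700.0, baseline_m=0.12):
    # Scale both images from the first resolution to the second resolution.
    left_s = cv2.resize(left_2d, second_res, interpolation=cv2.INTER_AREA)
    right_s = cv2.resize(right_2d, second_res, interpolation=cv2.INTER_AREA)

    # Stereo matching on the scaled pair yields a disparity map.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(
        cv2.cvtColor(left_s, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(right_s, cv2.COLOR_BGR2GRAY),
    ).astype(np.float32) / 16.0

    # Convert disparity to depth; invalid disparities map to zero depth.
    return np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)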
Abstract:
A method and apparatus for recognizing an imaged information-bearing medium, a computer-readable storage device and a computer device are provided. The method comprises: acquiring a first image of the imaged information-bearing medium; performing text recognition on the first image to acquire a text content of the imaged information-bearing medium; classifying the imaged information-bearing medium to acquire a type of the imaged information-bearing medium; and archiving the text content according to the type.
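A rough sketch of the recognize-classify-archive flow, assuming Tesseract for text recognition; the keyword-based classifier, the KEYWORDS table and the archive layout are placeholders for the method's actual classification step.

import pytesseract
from PIL import Image
from pathlib import Path

KEYWORDS = {"invoice": "invoice", "receipt": "receipt", "id card": "identity"}

def archive_document(image_path, archive_root="archive"):
    first_image = Image.open(image_path)             # acquire the first image
    text = pytesseract.image_to_string(first_image)  # text recognition
    lowered = text.lower()
    # Placeholder classification of the imaged information-bearing medium.
    doc_type = next((t for k, t in KEYWORDS.items() if k in lowered), "other")
    out_dir = Path(archive_root) / doc_type          # archive the text by type
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / (Path(image_path).stem + ".txt")).write_text(text)
    return doc_type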
Abstract:
The present disclosure provides a depth map generation apparatus, including a camera assembly with at least three cameras, an operation mode determination module and a depth map generation module. The camera assembly with at least three cameras may include a first camera, a second camera and a third camera that are sequentially aligned on a same axis. The operation mode determination module may be configured to determine an operation mode of the camera assembly. The operation mode includes at least: a first mode using images of non-adjacent cameras, and a second mode using images of adjacent cameras. Further, the depth map generation module may be configured to generate depth maps according to the determined operation mode.
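A sketch of how the operation mode might select a camera pair, assuming cameras 1, 2 and 3 lie on the axis in that order; the concrete pairing and the stereo routine passed in as stereo_fn are illustrative assumptions, not the disclosed modules.

from enum import Enum

class Mode(Enum):
    NON_ADJACENT = 1   # first mode: non-adjacent cameras, wider baseline
    ADJACENT = 2       # second mode: adjacent cameras, narrower baseline

def select_pair(images, mode):
    cam1, cam2, cam3 = images  # cameras aligned on one axis, in order
    if mode is Mode.NON_ADJACENT:
        return cam1, cam3      # e.g. far-range depth from the larger baseline
    return cam1, cam2          # e.g. near-range depth from a smaller baseline

def generate_depth_map(images, mode, stereo_fn):
    left, right = select_pair(images, mode)
    return stereo_fn(left, right)   # any stereo matcher, e.g. SGBM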
Abstract:
A translation pen includes: a pen body, an indication component, an image collector and a first processor. The pen body has a pen tip end. The indication component is arranged on the pen tip end. The image collector is arranged on the pen body, and the image collector is configured to: collect an image including a text to be translated according to a position indicated by the indication component, and send the collected image. The first processor is arranged in the pen body and electrically connected to the image collector, and the first processor is configured to: receive the image sent by the image collector, and recognize the text to be translated in the image.
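A schematic sketch of the capture-and-recognize flow on the processor side; the image_collector interface, its capture method and translate_fn are hypothetical stand-ins for the pen's image collector and any downstream translation step, with Tesseract standing in for the recognition.

import pytesseract

class TranslationPenProcessor:
    def __init__(self, image_collector, translate_fn):
        self.collector = image_collector   # camera near the pen tip end
        self.translate = translate_fn      # any translation backend

    def on_point(self):
        image = self.collector.capture()                  # image at the indicated position
        source_text = pytesseract.image_to_string(image)  # recognize the text to be translated
        return self.translate(source_text)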
Abstract:
A method for video classification includes: extracting an original image and an optical flow image corresponding to a to-be-classified video from the to-be-classified video; inputting the original image to a space-domain convolutional neural network model to obtain a space-domain classification result corresponding to the to-be-classified video; inputting the optical flow image to a time-domain convolutional neural network model to obtain a time-domain classification result corresponding to the to-be-classified video, wherein the time-domain convolutional neural network model and the space-domain convolutional neural network model are convolutional neural network models of different network architectures; and merging the space-domain classification result and the time-domain classification result to obtain a classification result corresponding to the to-be-classified video.
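A PyTorch sketch of the late-fusion step; spatial_net and temporal_net are assumed to be the two differently architected backbones returning class logits, and simple score averaging stands in for whatever merging rule the method actually uses.

import torch.nn.functional as F

def classify_video(rgb_frame, flow_stack, spatial_net, temporal_net):
    # rgb_frame: (1, 3, H, W) original image; flow_stack: (1, 2*T, H, W) stacked optical flow
    spatial_scores = F.softmax(spatial_net(rgb_frame), dim=1)    # space-domain result
    temporal_scores = F.softmax(temporal_net(flow_stack), dim=1)  # time-domain result
    fused = (spatial_scores + temporal_scores) / 2                # merge the two results
    return fused.argmax(dim=1)                                    # final class index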
Abstract:
A hand detection method, a hand segmentation method, an image detection method and system, a storage medium, and a device are provided. The image detection method includes: determining a first starting point in a connected domain of an image to be detected; determining n farthest extremum points different from the first starting point, wherein an Nth farthest extremum point is a pixel point in the connected domain having a maximum geodesic distance to an Nth starting point, an (N+1)th starting point is the Nth farthest extremum point, and n and N are both positive integers; performing region growing with the n farthest extremum points as initial points respectively, to acquire n regions in the connected domain; and judging whether a relationship between a preset feature of each region and a preset feature of the connected domain satisfies a selection condition, to determine an available region satisfying the selection condition.
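A sketch of the farthest-extremum-point search, using a 4-connected BFS as the geodesic distance inside the connected domain (a boolean mask); the subsequent region growing and the feature-based selection test are omitted, and the unit step cost is an assumption.

from collections import deque
import numpy as np

def geodesic_distances(mask, start):
    # BFS distances restricted to the connected domain given by mask.
    dist = np.full(mask.shape, np.inf)
    dist[start] = 0
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and dist[ny, nx] == np.inf):
                dist[ny, nx] = dist[y, x] + 1
                queue.append((ny, nx))
    return dist

def farthest_extremum_points(mask, first_start, n):
    points, start = [], first_start
    for _ in range(n):
        dist = geodesic_distances(mask, start)
        dist[~mask] = -1                                       # ignore pixels outside the domain
        start = np.unravel_index(np.argmax(dist), dist.shape)  # Nth farthest extremum point
        points.append(start)                                   # it becomes the (N+1)th starting point
    return points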
Abstract:
A heuristic finger detection method based on a depth image is disclosed. The method includes the steps of: acquiring a hand connected region from a user's depth image; calculating the central point of the hand connected region; calculating a plurality of extremely far points in the hand connected region that have extremal 3D geodesic distances from the central point; detecting fingertips and finger regions from the plurality of calculated extremely far points; and outputting fingertip positions and the finger regions. The method calculates and detects fingertips of users by means of 3D geodesic distance, without extracting boundary contours of hand regions, which improves robustness of gesture detection and reduces detection error rates. The method has the advantages of higher finger detection accuracy and faster computation.
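A sketch of the fingertip-candidate step, reusing a geodesic-distance routine such as the one sketched above; the use of true 3D geodesic distance over the depth values and the suppression of neighbouring candidates on the same finger are simplified away here, so the top-k points are only rough fingertip candidates.

import numpy as np

def detect_fingertips(hand_mask, geodesic_fn, num_fingers=5):
    ys, xs = np.nonzero(hand_mask)
    center = (int(ys.mean()), int(xs.mean()))        # central point of the hand connected region
    dist = geodesic_fn(hand_mask, center)            # 3D geodesic distance in the full method
    dist[~hand_mask] = -1                            # keep only pixels inside the hand region
    flat_order = np.argsort(dist, axis=None)[::-1]   # extremely far points first
    candidates = np.column_stack(
        np.unravel_index(flat_order[:num_fingers], dist.shape))
    return center, candidates                        # fingertip candidates and central point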
Abstract:
The present disclosure provides a depth determination method, a depth determination device and an electronic device. The depth determination method includes steps of: acquiring a color image and a depth image from a camera; performing image identification based on the color image, and determining a first image region of the color image where a feature object is recorded; determining a second image region of the depth image corresponding to the first image region in accordance with a correspondence between pixels of the color image and the depth image; and determining a feature depth of the feature object based on depth information corresponding to the pixels in the second image region.
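A sketch assuming the depth image is already registered to the color image, so the first and second regions share pixel coordinates; detect_fn is a placeholder for the image-identification step, and taking the median is one plausible way of reducing the region's depth values to a single feature depth.

import numpy as np

def feature_depth(color_img, depth_img, detect_fn):
    x, y, w, h = detect_fn(color_img)           # first image region containing the feature object
    region_depth = depth_img[y:y + h, x:x + w]  # corresponding second image region of the depth map
    valid = region_depth[region_depth > 0]      # ignore missing depth readings
    return float(np.median(valid)) if valid.size else None  # feature depth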
Abstract:
The present disclosure provides a gesture control method, comprising: acquiring an image; performing a gesture detection on the image to recognize a gesture from the image; determining, if no gesture is recognized from the image, whether a time interval from a last gesture detection, in which a gesture was recognized, to the current gesture detection is less than a preset time period; tracking, if the time interval is less than the preset time period, the gesture in the image based on a comparative gesture which is a gesture recognized last time or tracked last time; and updating the comparative gesture with a currently recognized gesture or a currently tracked gesture.
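A sketch of the detect-or-track loop; detect_fn, track_fn and the one-second window are placeholders for the method's detector, tracker and preset time period.

import time

class GestureController:
    def __init__(self, detect_fn, track_fn, preset_seconds=1.0):
        self.detect, self.track = detect_fn, track_fn
        self.window = preset_seconds
        self.comparative = None     # gesture recognized or tracked last time
        self.last_hit_time = None   # time of the last detection that recognized a gesture

    def process(self, image):
        detected = self.detect(image)
        now = time.monotonic()
        if detected is not None:
            self.comparative = detected      # update the comparative gesture
            self.last_hit_time = now
            return detected
        if (self.comparative is not None and self.last_hit_time is not None
                and now - self.last_hit_time < self.window):
            tracked = self.track(image, self.comparative)  # track within the preset time period
            if tracked is not None:
                self.comparative = tracked   # comparative gesture becomes the tracked gesture
            return tracked
        return None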
Abstract:
A character recognition method includes: performing feature extraction on an image to be recognized to obtain a first feature map; processing the first feature map to at least obtain N first candidate carrier detection boxes, each first candidate carrier detection box being configured to outline a region of a character carrier; screening the N first candidate carrier detection boxes to obtain K first target carrier detection boxes; performing a feature extraction on the first feature map to obtain a second feature map; processing the second feature map to obtain L first candidate character detection boxes, each first candidate character detection box being configured to outline a region containing at least one character; screening the L first candidate character detection boxes to obtain J first target character detection boxes; and recognizing characters in the J first target character detection boxes to obtain J pieces of target character information.
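A pipeline sketch; the backbone, the neck producing the second feature map, the two detection heads and the recognizer are placeholders, and torchvision's non-maximum suppression stands in for the two screening steps described in the abstract.

from torchvision.ops import nms

def recognize_characters(image, backbone, neck, carrier_head, char_head,
                         recognizer, iou_thr=0.5):
    feat1 = backbone(image)                                       # first feature map
    c_boxes, c_scores = carrier_head(feat1)                       # N candidate carrier detection boxes
    carrier_boxes = c_boxes[nms(c_boxes, c_scores, iou_thr)]      # K target carrier detection boxes

    feat2 = neck(feat1)                                           # second feature map
    ch_boxes, ch_scores = char_head(feat2)                        # L candidate character detection boxes
    char_boxes = ch_boxes[nms(ch_boxes, ch_scores, iou_thr)]      # J target character detection boxes

    # In the full method the carrier boxes constrain the character boxes;
    # here both sets are simply returned alongside the recognized text.
    texts = [recognizer(image, box) for box in char_boxes]        # J pieces of target character information
    return carrier_boxes, char_boxes, texts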