Abstract:
A projection processor includes a receiving circuit and an image processing circuit. The receiving circuit receives an input image. The image processing circuit performs at least one predetermined image processing operation upon the input image to generate an output image, wherein a projection source is generated according to the output image. The projection source is displayed or projected by a projection source component of an electronic device, such that a first cover of a projection display component partially reflects the projection source.
Abstract:
Methods and apparatus for compressing pre-stitched pictures captured by multiple cameras of a panoramic video capture device are disclosed. At the encoder side, stitching information associated with the stitching process that forms the pre-stitched pictures is used to encode a current block according to embodiments of the present invention, where the stitching information comprises calibration data, matching results, seam positions, blending levels, sensor data, or a combination thereof. In one embodiment, the stitching information corresponds to matching results associated with a projection process, and projection-based Inter prediction is used to encode the current block by projecting a reference block in a reference pre-stitched picture to the coordinates of the current block. In another embodiment, the stitching information corresponds to seam information associated with seam detection, and seam-based Inter prediction is used to encode the current block by utilizing the seam information.
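The projection-based Inter prediction described above can be illustrated with a minimal sketch. The abstract does not specify the actual mapping derived from the stitching matching results, so a 2x3 affine transform stands in for it here; the function name, signature, and wrap-around sampling are all illustrative assumptions.

```python
import numpy as np

def project_reference_block(ref_picture, block_pos, block_size, affine):
    """Hypothetical projection-based Inter prediction: sample the
    reference pre-stitched picture at coordinates mapped by a 2x3
    affine transform (a stand-in for the real stitching mapping)."""
    h, w = block_size
    y0, x0 = block_pos
    predicted = np.zeros((h, w), dtype=ref_picture.dtype)
    for dy in range(h):
        for dx in range(w):
            x, y = x0 + dx, y0 + dy
            # map current-block coordinates into the reference picture
            rx = affine[0, 0] * x + affine[0, 1] * y + affine[0, 2]
            ry = affine[1, 0] * x + affine[1, 1] * y + affine[1, 2]
            ri = int(round(ry)) % ref_picture.shape[0]  # wrap for 360-degree content
            rj = int(round(rx)) % ref_picture.shape[1]
            predicted[dy, dx] = ref_picture[ri, rj]
    return predicted
```

With an identity transform the predictor degenerates to ordinary co-located block copy; the benefit claimed in the abstract comes from using the stitching-derived mapping instead of searching for a motion vector.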
Abstract:
A controller for generating an output image to be rendered on a transparent display panel is provided. The controller is configured to: receive an input image; calculate an opacity of each pixel in the input image according to a predetermined equation associated with the transparent display panel; determine a display mode of one or more portions of the input image according to transparency indication information associated with the one or more portions of the input image, wherein the display mode corresponds to the transparency of the one or more portions in the input image; and obtain the output image to be displayed on the transparent display panel according to the determined display mode of the one or more portions of the input image.
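The per-pixel opacity step can be sketched as follows. The abstract leaves the panel-specific "predetermined equation" unspecified, so a relative-luminance formula is used here purely as a placeholder, and the threshold-based display-mode rule is likewise an assumption.

```python
def pixel_opacity(r, g, b):
    """Placeholder for the panel-specific opacity equation: here,
    opacity follows relative luminance of the pixel (the actual
    equation is left unspecified in the abstract)."""
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return luminance / 255.0

def display_mode(opacity, threshold=0.1):
    # Hypothetical rule: portions below the threshold render transparent,
    # letting the scene behind the panel show through.
    return "transparent" if opacity < threshold else "opaque"
```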
Abstract:
A display method for video conferencing and an associated video conferencing system are provided. The video conferencing system includes a display, an image capturing unit, and a network interface unit. The method includes the steps of: utilizing the image capturing unit to capture images of a local user in a video conference; performing foreground segmentation on the captured images to obtain a foreground object; flipping the foreground object horizontally; identifying a human face from the flipped foreground object and correcting a facing angle of the human face; determining interaction data from the local user on the display; encoding the interaction data and the flipped foreground object into an interaction stream and a video stream, respectively; packing the interaction stream and the video stream into an output stream; and transmitting the output stream to a remote user of the video conference through the network interface unit.
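Two steps of the pipeline above, flipping the segmented foreground and packing the two encoded streams into one output stream, can be sketched as below. The segmentation mask is assumed to exist already, and the length-prefixed container format is an illustrative assumption, not the format claimed by the abstract.

```python
import numpy as np

def flip_foreground(frame, mask):
    """Keep only the foreground object (mask == True) and mirror it
    horizontally, as in the method's flipping step."""
    foreground = np.where(mask, frame, 0)
    return foreground[:, ::-1]

def pack_streams(interaction_stream: bytes, video_stream: bytes) -> bytes:
    # Hypothetical container: length-prefix the interaction stream so the
    # remote end can split the output stream back into its two parts.
    header = len(interaction_stream).to_bytes(4, "big")
    return header + interaction_stream + video_stream
```

The remote side would read the 4-byte prefix, split off the interaction stream, and decode the remainder as video.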
Abstract:
Methods and apparatuses pertaining to a simulated transparent device may involve capturing a first image of a surrounding of the display with a first camera, as well as capturing a second image of the user with a second camera. The methods and apparatuses may further involve constructing a see-through window of the first image, wherein, when presented on the display, the see-through window substantially matches the surrounding and creates a visual effect with which at least a portion of the display is substantially transparent to the user. The methods and apparatuses may further involve presenting the see-through window on the display. The constructing of the see-through window may involve computing a set of cropping parameters, a set of deforming parameters, or a combination of both, based on a spatial relationship among the surrounding, the display, and the user.
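The cropping-parameter computation can be illustrated with a similar-triangles sketch: the region of the rear camera's scene that a viewer would see through a truly transparent panel grows with the scene's distance behind the display and shifts opposite to the viewer's lateral offset. All names and the specific geometry model are illustrative assumptions; the abstract only states that the parameters depend on the spatial relationship among surrounding, display, and user.

```python
def crop_parameters(display_w, display_h, user_dist, scene_dist, eye_x, eye_y):
    """Hypothetical geometry: scale the visible scene region by
    (user_dist + scene_dist) / user_dist and shift its center opposite
    to the viewer's eye offset (all distances in the same units)."""
    scale = (user_dist + scene_dist) / user_dist
    crop_w = display_w * scale
    crop_h = display_h * scale
    # shift the see-through window opposite to the viewer's lateral offset
    center_x = -eye_x * scene_dist / user_dist
    center_y = -eye_y * scene_dist / user_dist
    return crop_w, crop_h, center_x, center_y
```

For a centered viewer, only the scale matters; as the viewer moves sideways, the window slides the other way, which is what sustains the transparency illusion.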
Abstract:
A wearable device interactive system and techniques, methods and apparatuses thereof are described. A wearable device may sense a movement by a user wearing the wearable device. The wearable device may also determine whether a path of the movement corresponds to one or more predefined patterns. The wearable device may further perform one or more operations in response to a determination that the path of the movement corresponds to at least one of the one or more predefined patterns.
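The match-then-act flow above can be sketched with a deliberately simple matcher: compare the sensed path against each predefined pattern point-by-point (assuming both were resampled to equal length beforehand) and run the operation bound to the first match. The mean-distance criterion and all names are illustrative assumptions.

```python
def matches_pattern(path, pattern, tolerance=0.5):
    """Hypothetical matcher: mean Euclidean distance between the sensed
    path and a predefined pattern of the same length."""
    if len(path) != len(pattern):
        return False
    total = sum(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                for (px, py), (qx, qy) in zip(path, pattern))
    return total / len(path) <= tolerance

def handle_movement(path, patterns, actions):
    # perform the operation bound to the first pattern the path matches
    for name, pattern in patterns.items():
        if matches_pattern(path, pattern):
            return actions[name]()
    return None
```

A production device would more likely use a sequence matcher such as dynamic time warping, but the control flow is the same.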
Abstract:
A wearable device interactive system and techniques, methods and apparatuses thereof are described. A wearable device may sense a user input by a hand of the user, analyze the user input, and perform one or more operations responsive to a result of the analysis. For example, the wearable device may launch an application corresponding to the user input. As another example, the wearable device may recognize a text at a fingertip of the user and determine a location of the wearable device to determine a context, and launch an application corresponding to the context.
Abstract:
A system for providing an image or video to be displayed by a projective display system includes an encoding subsystem and a packing subsystem. The encoding subsystem is configured to encode at least one image or video of a subject to generate encoded image data. The packing subsystem is coupled to the encoding subsystem and configured to pack the encoded image data with projection configuration information regarding the projective display system to generate packed image data. The projective display system comprises a projection source device and a projection surface, wherein the projection source device projects the image or video onto the projection surface according to the packed image data.
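The packing subsystem's job, bundling encoded image data with projection configuration information, can be sketched as below. The abstract does not specify a container layout, so a length-prefixed JSON header followed by the encoded payload is used here as an illustrative assumption.

```python
import json
import struct

def pack_image_data(encoded_image: bytes, projection_config: dict) -> bytes:
    """Hypothetical packing format: a length-prefixed JSON projection
    configuration followed by the encoded image payload."""
    config_blob = json.dumps(projection_config, sort_keys=True).encode()
    return struct.pack(">I", len(config_blob)) + config_blob + encoded_image

def unpack_image_data(packed: bytes):
    """Inverse operation, as the projection source device would apply it."""
    (config_len,) = struct.unpack(">I", packed[:4])
    config = json.loads(packed[4:4 + config_len])
    return config, packed[4 + config_len:]
```

Keeping the configuration alongside the payload lets the projection source device adapt its output (e.g., to a curved or angled surface) without a separate configuration channel.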