Abstract:
Video content is received at a computing device, the video content including camera views provided by video cameras that are aligned to capture images of participants within a defined space. The video cameras are aligned such that a field of view (FOV) for each camera overlaps a portion of the FOV of at least one other adjacent camera. Positions of participants depicted within the video content are detected, and target views are generated that combine into a continuous view of the video content including the detected participants. The target views are displayed at display devices.
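The detect-then-crop flow described above can be illustrated with a minimal sketch. This is not the patented method: it assumes the camera views have already been composited into a single panorama array, stands in a toy brightness-blob detector for a real person detector, and invents the names `detect_participants` and `target_views` for illustration.

```python
import numpy as np

def detect_participants(frame, threshold=128):
    """Toy participant detector: returns the column centers of bright
    blobs. (A real system would run a face/person detector.)"""
    col_mask = frame.max(axis=0) > threshold
    centers, start = [], None
    for x, on in enumerate(col_mask):
        if on and start is None:
            start = x
        elif not on and start is not None:
            centers.append((start + x - 1) // 2)
            start = None
    if start is not None:
        centers.append((start + len(col_mask) - 1) // 2)
    return centers

def target_views(panorama, centers, view_width):
    """Crop one target view per detected participant; the crops can be
    placed side by side to form a continuous view of all participants."""
    half = view_width // 2
    views = []
    for c in centers:
        # Clamp the crop window inside the panorama.
        left = max(0, min(c - half, panorama.shape[1] - view_width))
        views.append(panorama[:, left:left + view_width])
    return views
```

A usage pass: place two bright "participants" in a dark panorama, detect them, and crop one fixed-width target view around each.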
Abstract:
A processing system can include an encoder to encode a real-time transmission of a presentation. A memory buffer can copy and store images of the presentation and convert the images into snapshot images. A transmitter can transmit the snapshot images to an external annotation device, and a receiver can receive annotation data of an annotation performed on the snapshot images at the external annotation device. The annotation can be encoded, in accordance with the annotation data, into the real-time transmission of the presentation to display the real-time transmission with the annotation.
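The snapshot/annotation round trip can be sketched as follows. This is a simplified illustration, not the patented encoder: frames are modeled as 2-D character grids, annotation data as `(x, y, mark)` tuples, and the class name `AnnotatedStream` and its methods are invented for the example.

```python
class AnnotatedStream:
    """Copies a still snapshot out of the live stream, accepts annotation
    data made against that snapshot, and burns the annotations into the
    outgoing real-time frames."""

    def __init__(self):
        self._snapshot = None
        self._annotations = []

    def snapshot(self, frame):
        # Deep-copy the current frame so later annotation work on the
        # still image never disturbs the live transmission.
        self._snapshot = [row[:] for row in frame]
        return self._snapshot

    def receive(self, x, y, mark):
        # Annotation data sent back from the external annotation device.
        self._annotations.append((x, y, mark))

    def encode(self, frame):
        # Encode the live frame with received annotations overlaid.
        out = [row[:] for row in frame]
        for x, y, mark in self._annotations:
            out[y][x] = mark
        return out
```

Note the design point from the abstract: the snapshot is a copy, so the real-time stream keeps flowing while the external device annotates the still image, and the annotation is merged only at encode time.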
Abstract:
A controller controls a camera that produces a sequence of images and that has output coupled to a video encoder. The camera has an operating condition including a field of view and lighting, and one or more imaging parameters. The video encoder encodes images from the camera into codewords. The controller receives one or more encoding properties from the video encoder, and causes adjusting one or more of the imaging parameters based on at least one of the received encoding properties, such that the camera produces additional images of the sequence of images for the video encoder using the adjusted one or more imaging parameters.
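The feedback loop from encoder to camera can be sketched with one adjustment rule. The thresholds, parameter names (`gain`, `exposure_ms`), and encoder properties (`qp`, `bits_per_frame`) are illustrative assumptions, not values from the patent.

```python
def adjust_imaging(params, encoder_props, qp_high=40, bits_low=5000):
    """Illustrative controller rule: a very high quantization parameter
    suggests the encoder is fighting sensor noise, so lower the gain;
    very few coded bits suggest a flat, underexposed image, so raise
    the exposure. Returns a new parameter dict for subsequent frames."""
    params = dict(params)
    if encoder_props.get("qp", 0) > qp_high:
        params["gain"] = max(1.0, params["gain"] * 0.8)
    if encoder_props.get("bits_per_frame", bits_low) < bits_low:
        params["exposure_ms"] = min(33.0, params["exposure_ms"] * 1.25)
    return params
```

In use, the controller would call this once per reporting interval and hand the returned parameters back to the camera, so that the camera produces the next images of the sequence under the adjusted settings.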
Abstract:
Video content is received at a computing device, the video content including camera views provided by video cameras that are aligned to capture images of participants within a defined space. Each camera view is at a first resolution, and the video cameras are aligned such that a field of view (FOV) for each camera overlaps a portion of the FOV of at least one other adjacent camera. Positions of participants depicted within the video content are detected, where at least one participant is captured by the overlapping FOVs of two adjacent camera views. A target view is generated from the camera views, the target view having a second resolution that is lower than the first resolution and including a view of the at least one participant captured within the overlapping FOVs of the two adjacent camera views. The target view is displayed at a display device.
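The overlap-blend-and-downscale step can be sketched as below. This is a minimal stand-in, not the patented pipeline: it assumes exactly two pre-registered grayscale views, a linear cross-fade over the known overlap columns, and simple box averaging to reach the lower second resolution.

```python
import numpy as np

def stitch_and_downscale(left, right, overlap, factor=2):
    """Blend the overlapping columns of two adjacent camera views with a
    linear ramp, then box-average by `factor` to produce the
    lower-resolution target view."""
    ramp = np.linspace(1.0, 0.0, overlap)
    # Cross-fade the shared region so the participant seen by both
    # cameras appears once, without a visible seam.
    blend = left[:, -overlap:] * ramp + right[:, :overlap] * (1 - ramp)
    pano = np.hstack([left[:, :-overlap], blend, right[:, overlap:]])
    # Crop so both dimensions divide evenly, then box-average down.
    H = (pano.shape[0] // factor) * factor
    W = (pano.shape[1] // factor) * factor
    pano = pano[:H, :W]
    return pano.reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))
```

With two 4x6 views sharing 2 overlap columns, the panorama is 4x10 and the downscaled target view is 2x5.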
Abstract:
A video coder includes a forward coder and a reconstruction module determining a motion compensated predicted picture from one or more previously decoded pictures in a multi-picture store. The reconstruction module includes a reference picture predictor that uses only previously decoded pictures to determine one or more predicted reference pictures. The predicted reference picture(s) are used for motion compensated prediction. The reference picture predictor may include optical flow analysis that uses a current decoded picture and that may use one or more previously decoded pictures together with affine motion analysis and image warping to determine at least a portion of at least one of the reference pictures.
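The idea of synthesizing a predicted reference picture purely from previously decoded pictures can be illustrated with a toy global-motion version. This stands in a brute-force horizontal shift search for the optical-flow/affine analysis in the abstract, and the function name `predict_reference` is invented for the example.

```python
import numpy as np

def predict_reference(prev, curr, max_shift=3):
    """Estimate a global horizontal shift between two decoded pictures
    (toy stand-in for optical-flow/affine motion analysis), then
    extrapolate that motion one step forward and warp the current
    decoded picture into a predicted reference picture."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = np.abs(np.roll(prev, s, axis=1) - curr).mean()
        if err < best_err:
            best, best_err = s, err
    # Constant-velocity extrapolation: apply the same shift again.
    return np.roll(curr, best, axis=1)
```

Because the predicted reference is built only from already-decoded pictures, an encoder and decoder running the same rule construct identical reference pictures without transmitting them.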
Abstract:
A coding method, apparatus, and medium with software encoded thereon to implement a coding method. The coding method includes encoding the position of non-zero-valued coefficients in an ordered series of quantized transform coefficients of a block of image data, including encoding events using variable length coding using a plurality of variable length code mappings that each maps events to codewords, the position encoding including switching between the code mappings based on the context. The coding method further includes encoding amplitudes of the non-zero-valued coefficients using variable dimensional amplitude coding in the reverse order of the original ordering of the series.
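The two passes described (context-switched variable-length coding of nonzero-coefficient positions, then amplitude coding in reverse order) can be sketched as follows. The code tables, the context rule (switch tables after a run of one or more zeros), and the unary amplitude code are all illustrative assumptions, not the mappings from the patent.

```python
# Two illustrative VLC tables mapping (zero-run, nonzero-follows) events
# to codewords; the context selects which table encodes the next event.
TABLE_A = {(0, 1): "1", (1, 1): "01", (2, 1): "001"}
TABLE_B = {(0, 1): "01", (1, 1): "1", (2, 1): "001"}

def encode_block(coeffs):
    """Position pass: encode (run, 1) events, switching code tables based
    on whether the previous event ended a zero run (the context).
    Amplitude pass: emit nonzero amplitudes in reverse order (unary).
    Sketch only: handles zero runs up to length 2."""
    bits, run, table = [], 0, TABLE_A
    amplitudes = []
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            bits.append(table[(run, 1)])
            table = TABLE_B if run >= 1 else TABLE_A  # context switch
            amplitudes.append(abs(c))
            run = 0
    # Amplitudes in the reverse order of the original series.
    for a in reversed(amplitudes):
        bits.append("0" * (a - 1) + "1")
    return "".join(bits)
```

For the series `[3, 0, 1, 2]` the position pass emits `1`, `01`, `01` (the middle event uses TABLE_A, then the context switches to TABLE_B), and the amplitude pass emits the codes for 2, 1, 3 in that reversed order.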