Abstract:
Video content is received at a computing device, the video content comprising camera views provided by video cameras that are aligned to capture images of participants within a defined space. Each camera view is at a first resolution, and the video cameras are aligned such that a field of view (FOV) for each camera overlaps a portion of the FOV of at least one other adjacent camera. Positions of participants depicted within the video content are detected, where at least one participant is captured by overlapping FOVs of two adjacent camera views. A target view is generated from the camera views, the target view having a second resolution that is lower than the first resolution and including a view of the at least one participant captured within the overlapping FOVs of the two adjacent camera views. The target view is displayed at a display device.
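A minimal sketch of the final cropping step described above, assuming the overlapping camera views have already been stitched into one high-resolution panorama and that a participant's bounding box is known; the function name, padding factor, and simple downscale are illustrative assumptions, not the patented method.

import numpy as np
import cv2  # assumption: OpenCV is used for cropping and resizing

def make_target_view(panorama: np.ndarray,
                     participant_box: tuple,
                     target_size: tuple = (1280, 720)) -> np.ndarray:
    """Crop a region around a participant from a high-resolution panorama
    built from overlapping camera views, then scale it down to the lower
    target resolution.  `participant_box` is (x, y, w, h) in panorama pixels."""
    x, y, w, h = participant_box
    # Pad the box a little so the participant is framed with some headroom.
    pad_x, pad_y = int(0.2 * w), int(0.2 * h)
    x0 = max(0, x - pad_x)
    y0 = max(0, y - pad_y)
    x1 = min(panorama.shape[1], x + w + pad_x)
    y1 = min(panorama.shape[0], y + h + pad_y)
    crop = panorama[y0:y1, x0:x1]
    # Downscale to the target (second, lower) resolution.
    return cv2.resize(crop, target_size, interpolation=cv2.INTER_AREA)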
Abstract:
A processing system can include an encoder to encode a real-time transmission of a presentation. A memory buffer can copy and store images of the presentation and convert the images into snapshot images. A transmitter can transmit the snapshot images to an external annotation device, and a receiver can receive annotation data of an annotation performed on the snapshot images at the external annotation device. The annotation can be encoded, in accordance with the annotation data, into the real-time transmission of the presentation to display the real-time transmission with the annotation.
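A brief sketch of two pieces of the flow described above, assuming OpenCV for imaging: producing a snapshot image from a buffered presentation frame, and drawing received annotation data (assumed here to be lists of stroke points) onto frames before they reach the real-time encoder. The data format and drawing style are assumptions for illustration.

import numpy as np
import cv2  # assumption: OpenCV used for JPEG snapshots and drawing

def snapshot_for_annotation(frame: np.ndarray) -> bytes:
    """Convert a buffered presentation frame into a snapshot image suitable
    for transmission to an external annotation device."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    return jpeg.tobytes() if ok else b""

def apply_annotation(frame: np.ndarray, strokes: list) -> np.ndarray:
    """Draw annotation strokes (assumed to be lists of (x, y) points) onto the
    frame before it is handed to the real-time encoder."""
    out = frame.copy()
    for stroke in strokes:
        pts = np.array(stroke, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(out, [pts], isClosed=False, color=(0, 0, 255), thickness=3)
    return out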
Abstract:
Techniques are provided for establishing a videoconference session between participants at different endpoints, where each endpoint includes at least one computing device and one or more displays. A plurality of video streams is received at an endpoint, and each video stream is classified as at least one of a people view and a data view. The classified views are analyzed to determine one or more regions of interest for each of the classified views, where at least one region of interest has a size smaller than a size of the classified view. Synthesized views of at least some of the video streams are generated, wherein the synthesized views include at least one view including a region of interest, and views including the synthesized views are rendered at one or more displays of an endpoint device.
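A simplified sketch of the classify-then-find-ROI step, using OpenCV face detection as a stand-in for the people/data classifier and the union of face boxes as the region of interest; both choices are assumptions made only for illustration.

import numpy as np
import cv2  # assumption: OpenCV face detection stands in for the classifier

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_and_find_roi(frame: np.ndarray):
    """Label a frame as a 'people' or 'data' view and return one region of
    interest (x, y, w, h), which may be smaller than the full view."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # No faces detected: treat the whole frame as a data (content) view.
        return "data", (0, 0, frame.shape[1], frame.shape[0])
    x0 = min(x for x, y, w, h in faces)
    y0 = min(y for x, y, w, h in faces)
    x1 = max(x + w for x, y, w, h in faces)
    y1 = max(y + h for x, y, w, h in faces)
    return "people", (x0, y0, x1 - x0, y1 - y0)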
Abstract:
Video frames are captured at one or more cameras during a video conference session, where each video frame includes a digital image with a plurality of pixels. Depth values associated with each pixel are determined in at least one video frame, where each depth value represents a distance of a portion of the digital image represented by at least one corresponding pixel from the one or more cameras that capture the at least one video frame. Luminance values of pixels are adjusted within captured video frames based upon the depth values determined for the pixels so as to achieve relighting of the video frames as the video frames are displayed during the video conference session.
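A minimal sketch of depth-based luminance adjustment, assuming a per-pixel depth map in metres aligned with the video frame; the linear gain ramp between the near and far planes is an illustrative assumption rather than the patented relighting rule.

import numpy as np

def relight_by_depth(frame: np.ndarray, depth: np.ndarray,
                     near: float = 0.5, far: float = 3.0,
                     max_gain: float = 1.6) -> np.ndarray:
    """Brighten pixels in proportion to their distance from the camera so a
    participant far from the room lighting is not rendered too dark.
    `frame` is HxWx3 uint8 and `depth` is an HxW array of distances."""
    # Normalise depth into [0, 1] between the near and far planes.
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)
    gain = 1.0 + (max_gain - 1.0) * t          # farther pixels get more gain
    out = frame.astype(np.float32) * gain[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)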
Abstract:
Techniques are provided for receiving and decoding a sequence of video frames at a computing device, and analyzing a current video frame N to determine whether to skip or render the current video frame N for display by the computing device. The analyzing includes generating color histograms of the current video frame N and one or more previous video frames, and determining a difference value between the current video frame N and a previous video frame N−K, where K > 0, the difference value being based upon the generated color histograms. In response to the difference value not exceeding a threshold value, the current video frame N is rendered (or a recently rendered video frame N−K is updated using the current video frame); in response to the difference value exceeding the threshold value, the current video frame N is skipped from being rendered.
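A minimal sketch of the histogram-difference test, assuming OpenCV color histograms and a chi-square distance (the distance measure is an assumption); the render/skip decision itself is left to the caller, which compares the returned value against the threshold as described above.

import numpy as np
import cv2  # assumption: OpenCV used for the color histograms

def histogram_difference(frame_n: np.ndarray, frame_prev: np.ndarray,
                         bins: int = 32) -> float:
    """Compute a per-channel color-histogram difference between the current
    frame N and a previous frame N-K.  The caller compares the result against
    a threshold to decide whether frame N is rendered or skipped."""
    diff = 0.0
    for ch in range(3):
        h_n = cv2.calcHist([frame_n], [ch], None, [bins], [0, 256])
        h_p = cv2.calcHist([frame_prev], [ch], None, [bins], [0, 256])
        cv2.normalize(h_n, h_n)
        cv2.normalize(h_p, h_p)
        # Chi-square distance is one reasonable (assumed) difference measure.
        diff += cv2.compareHist(h_n, h_p, cv2.HISTCMP_CHISQR)
    return diff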
Abstract:
A controller controls a camera that produces a sequence of images and that has an output coupled to a video encoder. The camera has an operating condition including a field of view and lighting, and one or more imaging parameters. The video encoder encodes images from the camera into codewords. The controller receives one or more encoding properties from the video encoder and causes adjusting of one or more of the imaging parameters based on at least one of the received encoding properties, such that the camera produces additional images of the sequence for the video encoder using the adjusted one or more imaging parameters.
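A sketch of one iteration of such a feedback loop, assuming the encoding property reported by the encoder is the bit cost of the last frame; the parameter set and the specific adjustment rules and magnitudes are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ImagingParams:
    exposure_ms: float = 10.0
    gain_db: float = 0.0
    denoise_strength: float = 0.5

def adjust_imaging_params(params: ImagingParams,
                          bits_per_frame: float,
                          target_bits: float) -> ImagingParams:
    """Adjust camera imaging parameters from an encoding property reported by
    the encoder (here, bits spent on the last frame).  If frames cost far more
    bits than the target, raise denoising and lower gain so the camera output
    is easier to encode."""
    new = ImagingParams(params.exposure_ms, params.gain_db, params.denoise_strength)
    if bits_per_frame > 1.2 * target_bits:
        new.denoise_strength = min(1.0, params.denoise_strength + 0.1)
        new.gain_db = max(0.0, params.gain_db - 1.0)   # less gain, less sensor noise
    elif bits_per_frame < 0.8 * target_bits:
        new.denoise_strength = max(0.0, params.denoise_strength - 0.1)
    return new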
Abstract:
A coding method, apparatus, and medium with software encoded thereon to implement a coding method. The coding method includes encoding the positions of non-zero-valued coefficients in an ordered series of quantized transform coefficients of a block of image data, including encoding events using variable length coding with a plurality of variable length code mappings, each of which maps events to codewords, the position encoding including switching between the code mappings based on the context. The coding method further includes encoding amplitudes of the non-zero-valued coefficients using variable dimensional amplitude coding in the reverse order of the original ordering of the series.
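A greatly simplified illustration of the two ideas above: positions of non-zero coefficients encoded as zero-run lengths with a context-dependent switch between two code tables, and amplitudes encoded in reverse order. The two tables, the context rule, and the amplitude code are invented for illustration and are not the mappings defined in the patent.

# Toy variable length code tables for zero-run lengths.
TABLE_A = {0: "1", 1: "01", 2: "001", 3: "0001"}      # favors short zero runs
TABLE_B = {0: "0001", 1: "001", 2: "01", 3: "1"}      # favors long zero runs

def encode_positions(coeffs):
    """Encode the positions of non-zero coefficients as zero-run lengths,
    switching code tables based on the previously observed run (the context)."""
    bits, run, table = [], 0, TABLE_A
    for c in coeffs:
        if c == 0:
            run += 1
            continue
        bits.append(table.get(run, "0000" + format(run, "b")))  # escape for long runs
        table = TABLE_B if run >= 2 else TABLE_A                 # context switch
        run = 0
    return "".join(bits)

def encode_amplitudes(coeffs):
    """Encode amplitudes of the non-zero coefficients in reverse order of the
    series, using a simple (assumed) unary-style code."""
    amps = [abs(c) for c in coeffs if c != 0][::-1]
    return "".join("0" * (a - 1) + "1" for a in amps)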
Abstract:
A video coder includes a forward coder and a reconstruction module that determines a motion compensated predicted picture from one or more previously decoded pictures in a multi-picture store. The reconstruction module includes a reference picture predictor that uses only previously decoded pictures to determine one or more predicted reference pictures. The predicted reference picture(s) are used for motion compensated prediction. The reference picture predictor may include optical flow analysis that uses a current decoded picture and that may use one or more previously decoded pictures, together with affine motion analysis and image warping, to determine at least a portion of at least one of the reference pictures.
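A minimal sketch of predicting a reference picture from decoded pictures only, assuming OpenCV dense optical flow and a warp by remapping; extrapolating the flow by one step stands in for the affine motion analysis described above and is an assumption.

import numpy as np
import cv2  # assumption: OpenCV dense optical flow and remap for the warp

def predict_reference_picture(prev_decoded: np.ndarray,
                              curr_decoded: np.ndarray) -> np.ndarray:
    """Predict a reference picture by estimating dense optical flow between two
    previously decoded grayscale pictures and warping the current one forward
    along the estimated flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_decoded, curr_decoded, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    h, w = curr_decoded.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward mapping: sample the current picture where each predicted pixel
    # is expected to have come from, one flow step earlier.
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(curr_decoded, map_x, map_y, interpolation=cv2.INTER_LINEAR)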