Abstract:
A signed media bitstream comprises data units and signature units. Each signature unit is associated with one or more nearby data units and includes at least one fingerprint derived from the associated data units and a digital signature of the at least one fingerprint. A storing method comprises: receiving a segment of the media bitstream; identifying N≥2 instances of a repeating data unit in the received segment; pruning up to N−1 of the identified instances of the repeating data unit; and storing the received segment after pruning. A validation method comprises: receiving a segment of the media bitstream stored in accordance with the storing method; and validating a signature unit using a digital signature contained therein. Despite the pruning of the repeating data unit, the received associated data units can be successfully validated, either directly or indirectly, by means of different embodiments herein.
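The storing and validation steps can be sketched in Python. This is a minimal illustration, not the claimed implementation: SHA-256 stands in for the fingerprint function, the actual signing of the fingerprint is omitted, and the index map is one assumed way to make the pruned segment re-expandable so that the originally signed fingerprint can be recomputed.

```python
import hashlib

def fingerprint(units):
    """Hash an ordered sequence of data units (bytes) into one fingerprint."""
    h = hashlib.sha256()
    for u in units:
        h.update(u)
    return h.digest()

def prune_segment(units):
    """Keep only the first instance of each repeating data unit, and
    record where every instance sat so the segment can be re-expanded."""
    distinct, index_map = [], []
    for u in units:
        if u not in distinct:
            distinct.append(u)
        index_map.append(distinct.index(u))
    return distinct, index_map

def expand_segment(distinct, index_map):
    """Reconstruct the original unit sequence from the pruned storage."""
    return [distinct[i] for i in index_map]
```

With a segment such as `[b'A', b'B', b'A', b'A']`, pruning stores two units instead of four, yet the fingerprint of the re-expanded segment equals the fingerprint computed before pruning, so a signature over that fingerprint still validates.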
Abstract:
A device and a method of signing a video segment comprising one or more groups of pictures, GOPs, wherein each GOP comprises a header and one or more frames, are disclosed. For each of the one or more GOPs, a GOP hash is produced and the GOP hash is digitally signed by means of a digital signature to produce a signed GOP hash. For each GOP except a last GOP of the one or more GOPs, the respective signed GOP hash is saved in the header of a subsequent GOP. An additional GOP is added to the video segment after the last GOP of the one or more GOPs, wherein the additional GOP comprises a header and one or more frames. The signed GOP hash of the last GOP of the one or more GOPs is saved in the header of the additional GOP.
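The chaining of signed GOP hashes can be sketched as follows. The dict-based GOP structure, the SHA-256 hash, and the HMAC (standing in for a real digital signature with a private key) are all illustrative assumptions.

```python
import hashlib
import hmac

KEY = b'demo-signing-key'  # stand-in for a real private signing key

def gop_hash(frames):
    """Produce one hash over all frames of a GOP."""
    h = hashlib.sha256()
    for f in frames:
        h.update(f)
    return h.digest()

def sign(digest):
    # HMAC is used here only as a placeholder for a digital signature
    return hmac.new(KEY, digest, hashlib.sha256).digest()

def sign_segment(gops):
    """gops: list of {'header': dict, 'frames': [bytes, ...]}.
    Each GOP's signed hash is saved in the header of the subsequent GOP;
    an additional GOP is appended to carry the last GOP's signed hash."""
    for i in range(len(gops)):
        signed = sign(gop_hash(gops[i]['frames']))
        if i + 1 < len(gops):
            gops[i + 1]['header']['signed_gop_hash'] = signed
        else:
            gops.append({'header': {'signed_gop_hash': signed},
                         'frames': [b'']})
            break
    return gops
```

Placing each signed hash one GOP later means a verifier can check GOP n as soon as GOP n+1 arrives, and the appended GOP ensures the last real GOP is covered as well.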
Abstract:
A method includes defining a background model of a video sequence by applying a first algorithm, the background model defining, for each spatial area, whether that spatial area belongs to a background or a foreground in the video sequence, wherein a detected significant change in image data in a spatial area of an image frame relative to image data in said spatial area of a preceding image frame is indicative of said spatial area belonging to the foreground; indicating that an idle area of the defined foreground areas is to be transitioned from foreground to background; determining whether the idle area is to be transitioned by applying a second algorithm to image data of an image frame of the video sequence, the image data at least partly corresponding to the idle area; and, if the idle area is not to be transitioned, maintaining the idle area as a foreground area in the background model.
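A toy version of the two-algorithm interplay can be sketched on 1-D "frames" of pixel values. Frame differencing as the first algorithm, comparison against stored background values as the second, and the thresholds are all assumptions made for illustration.

```python
def first_algorithm(prev_frame, curr_frame, thresh=25):
    """Flag each spatial area (here: one pixel) as foreground when its
    image data changed significantly relative to the preceding frame."""
    return [abs(c - p) > thresh for p, c in zip(prev_frame, curr_frame)]

def second_algorithm(curr_frame, background, area, thresh=25):
    """Decide whether an idle foreground area may be transitioned to
    background: only if its current image data matches the model."""
    return all(abs(curr_frame[i] - background[i]) <= thresh for i in area)

def update_model(foreground, curr_frame, background, idle_area):
    """Transition the idle area only if the second algorithm agrees;
    otherwise maintain it as foreground in the background model."""
    if second_algorithm(curr_frame, background, idle_area):
        for i in idle_area:
            foreground[i] = False
    return foreground
```

The point of the second check is that mere inactivity is not enough: an object that stopped moving but still looks different from the modelled background stays foreground.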
Abstract:
The present application relates to detecting whether video images captured by a camera depict a live scene or a recorded video played on a monitor, display or computer screen that is set up to hide the scene from the camera. Metadata regarding the mapping operation used to transform image data between different intensity ranges, or bit depths, is included with the video and evaluated in order to determine whether a video replay attack has taken place.
Abstract:
There is provided a system, including a tag device and a camera arrangement, for following an object marked by the tag device with a camera. The tag device may measure a horizontal plane position and an altitude of the tag device according to different measurement principles, and transmit the measured values to the camera arrangement. The camera arrangement may measure a horizontal plane position and an altitude related to the camera according to different measurement principles. The camera arrangement may then control the pan and tilt settings of the camera based on differences in horizontal plane positions and altitudes between the camera and the tag device such that the camera is oriented in the direction of the tag device.
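The geometry behind controlling pan and tilt from the position and altitude differences can be sketched as below. The coordinate convention (zero pan along the +x axis, tilt positive upward) is an assumption for illustration.

```python
import math

def pan_tilt(camera_pos, tag_pos):
    """camera_pos, tag_pos: (x, y, altitude) tuples.
    Return (pan, tilt) in degrees that orient the camera toward the tag.
    Pan comes from the horizontal-plane position difference; tilt from
    the altitude difference relative to the horizontal distance."""
    dx = tag_pos[0] - camera_pos[0]
    dy = tag_pos[1] - camera_pos[1]
    dz = tag_pos[2] - camera_pos[2]
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt
```

For a tag one unit north and one unit above a camera at the origin, this yields a pan of 90° and a tilt of 45°.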
Abstract:
A pan-tilt camera is arranged to include a camera head, a stationary unit, an intermediate member arranged between the camera head and the stationary unit, a first rotary joint rotatably connecting the camera head to the intermediate member, and a second rotary joint, rotatably connecting the intermediate member to the stationary unit. A communication path is provided between the camera head and the stationary unit, including an optical waveguide arranged between the camera head and the stationary unit. The optical waveguide has a first end positioned at the first rotary joint to receive light from the camera head through the first rotary joint. The other end of the waveguide is positioned at the second rotary joint and is arranged to send light to the stationary unit through the second rotary joint.
Abstract:
A method for de-interlacing interlaced video includes receiving a first video field and a second video field of an interlaced video frame, generating a first video frame from the first video field and a first synthesized video field, where video data of the first synthesized video field is based exclusively on video data of the first and second video fields, generating a second video frame from the second video field and a second synthesized video field, where video data of the second synthesized video field is based exclusively on the video data of the first and second video fields, and outputting two de-interlaced video frames for every received interlaced video frame. The first (second) synthesized video field is generated by combining image data from the second (first) video field with image data from corresponding lines of an up-scaled first (second) field generated by a scaler.
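The field-synthesis step can be sketched on tiny list-based "fields". The averaging scaler, the averaging combination, and the line offsets are simplifying assumptions (real scalers and the exact field geometry at frame boundaries are more involved); the sketch only shows that each synthesized field draws exclusively on the two received fields.

```python
def upscale(field):
    """Line-double a field; interpolated lines average adjacent lines
    (a toy stand-in for the scaler)."""
    out = []
    for i, line in enumerate(field):
        nxt = field[i + 1] if i + 1 < len(field) else line
        out.append(line)
        out.append([(a + b) / 2 for a, b in zip(line, nxt)])
    return out

def synthesize(own_field, other_field, offset):
    """Combine the other field's real lines with the corresponding lines
    of the up-scaled own field (simple averaging as the combination)."""
    up = upscale(own_field)
    return [[(a + b) / 2 for a, b in zip(up[2 * i + offset], line)]
            for i, line in enumerate(other_field)]

def deinterlace(field1, field2):
    """Return two progressive frames per received interlaced frame:
    each real field interleaved with a synthesized complement based
    exclusively on the first and second fields."""
    synth1 = synthesize(field1, field2, offset=1)  # fills odd lines
    synth2 = synthesize(field2, field1, offset=0)  # fills even lines
    frame1, frame2 = [], []
    for a, b in zip(field1, synth1):
        frame1.extend([a, b])
    for a, b in zip(synth2, field2):
        frame2.extend([a, b])
    return frame1, frame2
```

Feeding one pair of two-line fields in yields two four-line frames out, matching the two-frames-per-frame output rate in the claim.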
Abstract:
A method for providing a signed video bitstream suitable for transcoding from a first video format into a second video format, the method comprising: obtaining first video data in a lossy first video format; reconstructing a video sequence from the first video data; encoding the reconstructed video sequence as second video data in a second video format; computing first fingerprints of the first video data and second fingerprints of the second video data; deriving a first bitstring from the first fingerprints and a second bitstring from the second fingerprints; and providing a signed video bitstream, which includes the first video data and signature units, each signature unit including a first digital signature of the derived first bitstring and a second digital signature of the derived second bitstring.
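The dual-signature construction can be sketched as follows. SHA-256 fingerprints, concatenation as the bitstring derivation, and HMAC in place of a real digital signature are illustrative assumptions; the point is that the bitstream carries a signature for the second (transcoded) format even though only first-format video data is included.

```python
import hashlib
import hmac

KEY = b'demo-signing-key'  # stand-in for a real private signing key

def fingerprints(video_units):
    """One fingerprint per coded video data unit."""
    return [hashlib.sha256(u).digest() for u in video_units]

def bitstring(fps):
    """Derive a single bitstring from a list of fingerprints."""
    return b''.join(fps)

def sign(data):
    # HMAC is used here only as a placeholder for a digital signature
    return hmac.new(KEY, data, hashlib.sha256).digest()

def signed_bitstream(first_units, second_units):
    """first_units: the delivered first-format video data;
    second_units: the reconstructed sequence re-encoded in the second
    format at signing time (not included in the output bitstream)."""
    signature_unit = {
        'first_signature': sign(bitstring(fingerprints(first_units))),
        'second_signature': sign(bitstring(fingerprints(second_units))),
    }
    return {'video': first_units, 'signature_units': [signature_unit]}
```

After transcoding to the second format, a verifier recomputes the second fingerprints from the transcoded data and checks them against the second signature, so the signature survives the transcode.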
Abstract:
A device and a method of signing an encoded video sequence comprising: obtaining an encoded video sequence composed of encoded image frames; generating a set of one or more frame fingerprints for each encoded image frame; generating a document comprising a header of a supplemental information unit, and a representation of the generated sets of one or more frame fingerprints; generating a document signature by digitally signing the document; generating the supplemental information unit to consist only of the document, the document signature and an indication of an end of the supplemental information unit; and signing the encoded video sequence by associating the generated supplemental information unit with the encoded video sequence.
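The layout of the supplemental information unit can be sketched in a few lines. The `b'SEI'` header, the `0x80` end marker, SHA-256 fingerprints and the HMAC stand-in for a digital signature are all assumptions for illustration; the structural point is that the unit consists only of the document, the document signature, and the end indication.

```python
import hashlib
import hmac

KEY = b'demo-signing-key'  # stand-in for a real private signing key

def frame_fingerprint(encoded_frame):
    """One fingerprint per encoded image frame (a set of one, here)."""
    return hashlib.sha256(encoded_frame).digest()

def make_supplemental_unit(encoded_frames):
    """Build the unit as: document (unit header plus a representation of
    the frame fingerprints), document signature, end-of-unit indication."""
    header = b'SEI'                        # placeholder unit header
    document = header + b''.join(frame_fingerprint(f)
                                 for f in encoded_frames)
    signature = hmac.new(KEY, document, hashlib.sha256).digest()
    return document + signature + b'\x80'  # 0x80: end-of-unit indication
```

Because the signed document already begins with the unit header, the unit can be emitted in one pass and associated with the encoded video sequence as-is.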
Abstract:
A method of signing prediction-coded video data, comprising: obtaining a coded video sequence including at least one I-frame (I), which contains independently decodable image data, and at least one predicted frame (P1, P2, P3, P4), which contains image data decodable by reference to at least one other frame; generating a fingerprint (HI) of each I-frame; generating a fingerprint (HP) of each predicted frame by hashing a combination of data derived from the predicted frame and data derived from an I-frame to which the predicted frame refers directly or indirectly, wherein the fingerprint of the predicted frame is independent of any further predicted frame to which the predicted frame refers directly or indirectly; and providing a signature of the video sequence including the generated fingerprints.
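The fingerprint structure can be sketched directly from the claim: HI hashes the I-frame, and each HP hashes the predicted frame's data together with HI, so every predicted-frame fingerprint depends on the referenced I-frame but on no other predicted frame. SHA-256 and plain concatenation are illustrative assumptions; the signing of the collected fingerprints is omitted.

```python
import hashlib

def gop_fingerprints(i_frame, predicted_frames):
    """Return (HI, [HP, ...]) for one I-frame and the predicted frames
    that refer to it directly or indirectly. HP = H(P-data || HI), so
    each HP is independent of any other predicted frame."""
    hi = hashlib.sha256(i_frame).digest()
    hps = [hashlib.sha256(p + hi).digest() for p in predicted_frames]
    return hi, hps
```

A useful consequence of this independence is that dropping or tampering with one predicted frame invalidates only that frame's fingerprint, while tampering with the I-frame invalidates every fingerprint that chained it in.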