Abstract:
Disclosed is a video frame coding method performed at a terminal. The method includes: obtaining and coding an ith video frame in a group of pictures, and counting a quantity of actually consumed bits corresponding to the ith video frame; detecting a state of the ith video frame based on the quantity of actually consumed bits, an initial average bit rate of the group of pictures, a quantization model, and a video frame detection rule, and determining multiple quantization parameters of an (i+1)th video frame; determining first quantities of allocated bits for compensation of first to-be-compensated video frames; and updating a first coding rule according to a first quantity of allocated bits for compensation and the quantization parameters corresponding to the (i+1)th video frame, and coding the (i+1)th video frame.
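The following is a minimal sketch of the per-frame bit accounting and compensation idea described above, in Python. The inverse-linear bits/QP relationship, the overshoot threshold, and all function and parameter names (detect_state, next_frame_qp, encode_gop, overshoot_ratio) are illustrative assumptions, not the quantization model or detection rule claimed in the abstract.

def detect_state(actual_bits, target_bits, overshoot_ratio=1.2):
    """Classify frame i as 'over', 'under', or 'normal' against its bit budget."""
    if actual_bits > target_bits * overshoot_ratio:
        return "over"
    if actual_bits < target_bits / overshoot_ratio:
        return "under"
    return "normal"

def next_frame_qp(prev_qp, state, step=2, qp_min=10, qp_max=51):
    """Pick a quantization parameter for frame i+1 from the detected state."""
    if state == "over":
        qp = prev_qp + step      # spend fewer bits on the next frame
    elif state == "under":
        qp = prev_qp - step      # spend more bits on the next frame
    else:
        qp = prev_qp
    return max(qp_min, min(qp_max, qp))

def encode_gop(frame_complexities, avg_bits_per_frame, init_qp=30):
    """Walk a GOP, tracking a running bit deficit that later frames compensate."""
    qp, deficit, log = init_qp, 0, []
    for i, complexity in enumerate(frame_complexities):
        # Hypothetical encoder model: consumed bits grow with complexity, shrink with QP.
        actual_bits = int(complexity * 100000 / qp)
        # This frame's budget = average budget plus a share of the accumulated deficit.
        target_bits = avg_bits_per_frame + deficit // max(1, len(frame_complexities) - i)
        state = detect_state(actual_bits, target_bits)
        deficit += target_bits - actual_bits
        qp = next_frame_qp(qp, state)
        log.append((i, actual_bits, target_bits, state, qp))
    return log

for row in encode_gop([1.0, 1.4, 0.8, 1.1], avg_bits_per_frame=3500):
    print(row)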
Abstract:
This application discloses a link decision-making method applied to a link decision-making system that includes first and second user equipment in a Voice over Internet Protocol (VoIP) call, a transit server, and a decision-making computing device. A first link that directly connects the two user equipment and a second link that is relayed through the transit server exist between the two user equipment. The decision-making computing device decides, according to a link quality score of the current sending link between the first and second user equipment, whether to use the first link or the second link as the subsequent sending link of the first user equipment. With the link decision-making method provided in this application, the better link can be selected to transmit the VoIP data stream, thereby improving the quality of service (QoS) of the VoIP call.
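A hedged sketch of the decision step follows. It assumes quality scores in the range 0-100 are computed elsewhere from loss, delay, and jitter reports; the hysteresis margin and the names LinkReport, choose_sending_link, and switch_margin are illustrative and not the scoring or decision rule claimed above.

from dataclasses import dataclass

@dataclass
class LinkReport:
    name: str            # "direct" (first link) or "relay" (second link, via transit server)
    quality_score: float

def choose_sending_link(current: str, direct: LinkReport, relay: LinkReport,
                        switch_margin: float = 5.0) -> str:
    """Return the link the first user equipment should use next.

    Keeps the current link unless the alternative is better by a margin,
    to avoid flapping between the direct and relayed paths.
    """
    candidate = direct if direct.quality_score >= relay.quality_score else relay
    current_report = direct if current == "direct" else relay
    if candidate.name != current:
        improvement = candidate.quality_score - current_report.quality_score
        if improvement < switch_margin:
            return current            # improvement too small: keep the current link
    return candidate.name

# Example: the relayed path scores clearly better, so the decision switches to it.
print(choose_sending_link("direct",
                          LinkReport("direct", 62.0),
                          LinkReport("relay", 74.5)))   # -> "relay"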
Abstract:
The application discloses video data redundancy control methods and apparatuses. Video packet redundancy control information is determined according to packet loss at a reception apparatus. The video packet redundancy control information is received from the reception apparatus. Video data is encoded according to the video packet redundancy control information to obtain encoded video data of a plurality of frames by a transmission apparatus. A frame-level redundancy budget is allocated for one of the plurality of frames according to the video packet redundancy control information. Further, the one of the plurality of frames is packetized according to the frame-level redundancy budget to generate a packetized frame. Redundancy coding is performed on the packetized frame to generate video packets including data packets and redundant packets for transmission to the reception apparatus.
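Below is a minimal sketch of frame-level redundancy budgeting and packetization under stated assumptions: the loss rate stands in for the video packet redundancy control information fed back by the reception apparatus, and the XOR parity is only an illustrative stand-in for real redundancy coding. The function names and the 1200-byte MTU are hypothetical.

def frame_redundancy_budget(frame_bytes, loss_rate, mtu=1200, min_redundant=1):
    """Number of redundant packets for one frame, derived from the loss feedback."""
    data_packets = max(1, -(-frame_bytes // mtu))     # ceiling division
    return max(min_redundant, round(data_packets * loss_rate))

def xor_parity(packets, mtu):
    """One XOR parity packet over all data packets (stand-in for FEC coding)."""
    parity = bytearray(mtu)
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def packetize_with_fec(frame, loss_rate, mtu=1200):
    """Split a frame into data packets and append redundant packets per the budget."""
    data = [frame[i:i + mtu] for i in range(0, len(frame), mtu)] or [b""]
    budget = frame_redundancy_budget(len(frame), loss_rate, mtu)
    redundant = [xor_parity(data, mtu)] * budget
    return data, redundant

data_pkts, red_pkts = packetize_with_fec(b"\x01" * 5000, loss_rate=0.1)
print(len(data_pkts), "data packets,", len(red_pkts), "redundant packets")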
Abstract:
A method, a system, and an apparatus are provided for sharing application information. The method receives a match request sent by a mobile terminal X upon receipt of a share instruction from a user. According to the match request, the method determines whether, among the other mobile terminals that have sent match requests, there is a mobile terminal that matches the mobile terminal X, and if such a terminal is found, sends a match success message. When a server receives identifiers of the applications to be shared from either one of the matched mobile terminals, the method determines whether the other mobile terminal of the matched pair is connected to the server; if the other mobile terminal is online, the method obtains the relevant information of the application corresponding to each identifier and sends the obtained relevant information to the other mobile terminal.
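A sketch of the server-side flow follows, assuming a hypothetical in-memory server. Matching here is done by a shared pairing code, which is an illustrative stand-in for whatever match criterion the method actually uses; ShareServer, app_catalog, and the terminal identifiers are invented for the example.

class ShareServer:
    def __init__(self, app_catalog):
        self.pending = {}              # pairing_code -> terminal_id awaiting a match
        self.online = set()            # terminal_ids currently connected
        self.matches = {}              # terminal_id -> matched terminal_id
        self.app_catalog = app_catalog # app_id -> relevant info (name, download URL, ...)

    def connect(self, terminal_id):
        self.online.add(terminal_id)

    def match_request(self, terminal_id, pairing_code):
        """Pair two terminals that sent the same code; report success once both arrive."""
        other = self.pending.pop(pairing_code, None)
        if other is None:
            self.pending[pairing_code] = terminal_id
            return None                                   # wait for the peer
        self.matches[terminal_id] = other
        self.matches[other] = terminal_id
        return {"status": "match_success", "peer": other}

    def share_apps(self, sender_id, app_ids):
        """Forward relevant app info to the sender's matched terminal, if it is online."""
        peer = self.matches.get(sender_id)
        if peer is None or peer not in self.online:
            return None
        return {"to": peer,
                "apps": [self.app_catalog[a] for a in app_ids if a in self.app_catalog]}

server = ShareServer({"app.maps": {"name": "Maps", "url": "https://example.com/maps"}})
server.connect("terminal_X"); server.connect("terminal_Y")
server.match_request("terminal_X", "1234")
print(server.match_request("terminal_Y", "1234"))
print(server.share_apps("terminal_X", ["app.maps"]))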
Abstract:
A video data processing method and apparatus are provided. The method includes: encoding, by an encoder side, obtained original video data according to a hierarchical P-frame prediction (HPP) structure to obtain an HPP bitstream; redundancy-coding the HPP bitstream according to a forward error correction (FEC) code, where the redundancy packet quantities of frames in the HPP bitstream progressively decrease from lower to higher temporal layers to which the frames belong in the HPP structure; and sorting the frames in the redundancy-coded HPP bitstream and sequentially sending them to a decoder side.
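The sketch below shows the layer-dependent allocation idea: frames in lower temporal layers of a hierarchical-P structure (which more frames reference) receive more redundant packets. The dyadic layer assignment, the base redundancy of 4 packets, and the linear decay are illustrative assumptions, not the FEC allocation claimed above.

def hpp_temporal_layer(frame_index: int, gop_size: int = 8) -> int:
    """Dyadic hierarchical-P layer: the key frame is layer 0, finer positions sit deeper."""
    pos = frame_index % gop_size
    if pos == 0:
        return 0
    layer, step = 1, gop_size // 2
    while step > 0 and pos % step != 0:
        layer, step = layer + 1, step // 2
    return layer

def redundancy_packets(layer: int, base: int = 4) -> int:
    """Redundant-packet quantity decreases as the temporal layer increases."""
    return max(0, base - layer)

for i in range(8):
    layer = hpp_temporal_layer(i)
    print(f"frame {i}: layer {layer}, redundant packets {redundancy_packets(layer)}")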
Abstract:
The present application provides a video encoding method that includes: setting frame types for a video sequence; obtaining a B frame; determining whether a current macroblock of the B frame satisfies a Direct prediction mode and, if so, determining whether the current macroblock satisfies a Skip prediction mode; if the current macroblock meets neither mode, computing at least one of: a mode cost after performing motion compensation on the current macroblock using the two bidirectional prediction motion vectors obtained in the Direct prediction mode, a mode cost after performing motion compensation on the current macroblock using the forward prediction motion vector obtained in the Direct prediction mode, and a mode cost after performing motion compensation on the current macroblock using the backward prediction motion vector obtained in the Direct prediction mode; and selecting the mode with the smallest cost as the optimal prediction direction to encode the current macroblock.
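A hedged sketch of the prediction-direction decision for a B-frame macroblock follows. The SAD cost and the predict() stand-in are illustrative assumptions; a real encoder would run motion compensation with the Direct-mode motion vectors against reconstructed reference frames and may use a rate-distortion cost instead of plain SAD.

import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """Sum of absolute differences, used as the mode cost in this sketch."""
    return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

def predict(reference: np.ndarray) -> np.ndarray:
    """Stand-in for motion compensation with a Direct-mode motion vector."""
    return reference

def choose_b_direction(current_mb, fwd_ref_mb, bwd_ref_mb):
    """Pick the cheapest of bidirectional, forward, and backward prediction."""
    costs = {
        "bidirectional": sad(current_mb, (predict(fwd_ref_mb) + predict(bwd_ref_mb)) // 2),
        "forward": sad(current_mb, predict(fwd_ref_mb)),
        "backward": sad(current_mb, predict(bwd_ref_mb)),
    }
    return min(costs, key=costs.get), costs

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (16, 16), dtype=np.uint8)
fwd = np.clip(cur.astype(int) + rng.integers(-3, 4, (16, 16)), 0, 255)
bwd = np.clip(cur.astype(int) + rng.integers(-20, 21, (16, 16)), 0, 255)
print(choose_b_direction(cur, fwd, bwd)[0])   # likely "forward" for this synthetic data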