Spatial-temporal graph-to-sequence learning based grounded video descriptions
Abstract:
Techniques for generating a grounded video description (GVD) for a video input are provided. A Hierarchical Attention based Spatial-Temporal Graph-to-Sequence Learning framework produces a GVD by generating an initial graph representing a plurality of object features in a plurality of frames of a received video input, and generating an implicit graph for the plurality of object features in the plurality of frames using a similarity function. The initial graph and the implicit graph are combined to form a refined graph, and the refined graph is processed using attention processes to generate an attended hierarchical graph of the plurality of object features for the plurality of frames. The grounded video description is then generated for the received video input using at least the attended hierarchical graph of the plurality of object features.
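The abstract outlines a pipeline: an initial graph over object features, an implicit graph built from a similarity function, fusion into a refined graph, and attention over that graph. A minimal sketch of those steps is given below, assuming PyTorch, scaled dot-product as the similarity function, and a convex combination as the fusion step; the patent does not specify these choices, so all names, shapes, and operations here are illustrative assumptions rather than the claimed implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy setup: N object features of dimension D pooled from video frames.
# These shapes are illustrative, not taken from the patent.
N, D = 6, 16
features = torch.randn(N, D)

# Initial graph: stands in for structure such as objects co-occurring in
# a frame. A random 0/1 adjacency is used here purely for illustration.
initial_adj = (torch.rand(N, N) > 0.5).float()

# Implicit graph: a similarity function over object features. Scaled
# dot-product similarity is one common choice (an assumption here).
scores = features @ features.t() / D ** 0.5
implicit_adj = F.softmax(scores, dim=-1)

# Refined graph: combine the initial and implicit graphs. A simple
# convex combination of row-normalized adjacencies is assumed.
alpha = 0.5
refined_adj = (alpha * F.normalize(initial_adj, p=1, dim=-1)
               + (1 - alpha) * implicit_adj)

# Attention over the refined graph: re-weight edges by feature
# similarity, then aggregate neighbors to obtain attended node
# features that a description decoder could consume.
attn_logits = (features @ features.t()) / D ** 0.5
attn = F.softmax(attn_logits + torch.log(refined_adj + 1e-9), dim=-1)
attended_features = attn @ features  # (N, D)

print(attended_features.shape)  # torch.Size([6, 16])
```

In this sketch the log of the refined adjacency acts as an attention bias, so edges that are weak in the refined graph are suppressed; the attended features would then feed the sequence decoder that emits the grounded description.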