Multi-turn dialogue response generation with autoregressive transformer models
Abstract:
Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in dialogue data better than existing RNN-based architectures. Additionally, machine classifiers may model the joint distribution of the context and response, as opposed to the conditional distribution of the response given the context employed in sequence-to-sequence frameworks. Machine classifiers in accordance with embodiments of the invention may further append random padding before and/or after the input data to reduce syntactic redundancy in the input data, thereby improving the performance of the machine classifiers for a variety of dialogue-related tasks. The random padding of the input data may further provide regularization during training of the machine classifier and/or reduce exposure bias. In a variety of embodiments, the input data may be encoded based on subword tokenization.
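The following is a minimal sketch of how joint-distribution training input with random padding might be assembled, assuming token ids produced by a subword tokenizer; the names PAD_ID, EOS_ID, and build_training_sequence are hypothetical and not taken from the patent.

```python
import random

# Hypothetical special-token ids; in practice these would come from a
# subword (e.g. byte-pair encoding) tokenizer's vocabulary.
PAD_ID = 0
EOS_ID = 1

def build_training_sequence(context_ids, response_ids, max_pad=8, rng=random):
    """Concatenate context and response into one sequence so an
    autoregressive transformer models the joint distribution of
    context and response, then prepend/append a random number of
    padding tokens (regularization / reduced exposure bias)."""
    left_pad = [PAD_ID] * rng.randint(0, max_pad)
    right_pad = [PAD_ID] * rng.randint(0, max_pad)
    return left_pad + context_ids + [EOS_ID] + response_ids + [EOS_ID] + right_pad

# Example with toy subword ids for one context/response pair.
context = [17, 42, 99, 8]
response = [23, 5, 61]
print(build_training_sequence(context, response))
```

Because the padded positions carry no content, the amount of padding can be resampled each epoch, so the same dialogue pair is seen at shifted positions across training.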