Abstract:
A method trains an inference model on two-hop NLI problems, each of which includes a first premise, a second premise, and a hypothesis, and further includes generating, by the model using hypothesis reduction, an explanation from an input premise and an input hypothesis for an input single-hop NLI problem. The learning step determines a distribution over extraction starting positions and lengths from within the first premise and the hypothesis of a two-hop NLI problem. The learning step fills k extraction output slots with combinations of words from the first premise of the two-hop NLI problem and fills another k extraction output slots with combinations of words from the hypothesis of the two-hop NLI problem. The learning step trains a sequence model by using the extraction output slots and the other extraction output slots, together with the second premise, as an input to a single-hop NLI classifier to output a label of the two-hop NLI problem.
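A minimal sketch (PyTorch, with illustrative class and helper names that are assumptions, not components of the actual method) of the extraction step: start positions and span lengths are scored over the tokens of the first premise or the hypothesis, the top-k spans fill the extraction output slots, and the slot texts together with the second premise would then be passed to a single-hop NLI classifier.

```python
import torch
import torch.nn as nn

class SpanExtractor(nn.Module):
    def __init__(self, hidden_dim, max_span_len, k):
        super().__init__()
        self.k = k
        self.start_scorer = nn.Linear(hidden_dim, 1)            # scores each token as a span start
        self.len_scorer = nn.Linear(hidden_dim, max_span_len)   # scores candidate span lengths

    def forward(self, token_states):
        # token_states: (seq_len, hidden_dim) contextual encodings of premise 1 or the hypothesis
        start_logits = self.start_scorer(token_states).squeeze(-1)          # (seq_len,)
        starts = torch.topk(start_logits, self.k).indices                   # k starting positions
        lengths = self.len_scorer(token_states[starts]).argmax(dim=-1) + 1  # one length per start
        return [(int(s), int(l)) for s, l in zip(starts, lengths)]

def fill_slots(tokens, spans):
    # Fill the extraction output slots with the selected word combinations.
    return [" ".join(tokens[s:s + l]) for s, l in spans]
```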
Abstract:
A system verifies textual claims using a document corpus. The system includes a memory for storing program code and a processor device for running the code to retrieve documents from the corpus based on Term Frequency Inverse Document Frequency (TFIDF) similarity to a set of textual claims. The processor extracts named entities and capitalized phrases from the textual claims. The processor retrieves documents from the corpus with titles matching any of the extracted named entities and capitalized phrases. The processor extracts premise sentences from the retrieved documents. The processor classifies the premise sentences together with sources of the premise sentences against the textual claims to obtain classifications from among possible classifications including a supported, an unverified, or a contradicted classification. The processor aggregates the classifications over the premise sentences to selectively output, for each textual claim, an overall decision of the supported classification, the unverified classification, or the contradicted classification.
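As a rough illustration of this pipeline, here is a hedged sketch; the corpus document format, the capitalized-phrase regex standing in for named entity extraction, the classify_premise() callback, and the aggregation rule are all assumptions for illustration, not the system's actual components.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(claim, corpus_docs, top_k=5):
    # Step 1: documents ranked by TFIDF similarity to the claim.
    vec = TfidfVectorizer()
    doc_matrix = vec.fit_transform([d["text"] for d in corpus_docs])
    scores = cosine_similarity(vec.transform([claim]), doc_matrix)[0]
    by_tfidf = [corpus_docs[i] for i in scores.argsort()[::-1][:top_k]]
    # Step 2: named entities approximated by capitalized phrases, matched against titles.
    phrases = set(re.findall(r"[A-Z][\w'-]*(?: [A-Z][\w'-]*)*", claim))
    by_title = [d for d in corpus_docs if d["title"] in phrases]
    return by_tfidf + by_title

def verify(claim, corpus_docs, classify_premise):
    # classify_premise(sentence, source, claim) -> "SUPPORTED" | "UNVERIFIED" | "CONTRADICTED"
    labels = []
    for doc in retrieve(claim, corpus_docs):
        for sentence in doc["text"].split(". "):   # crude premise-sentence extraction
            labels.append(classify_premise(sentence, doc["title"], claim))
    # Aggregation rule is an assumption: any support wins, then any contradiction.
    if "SUPPORTED" in labels:
        return "SUPPORTED"
    if "CONTRADICTED" in labels:
        return "CONTRADICTED"
    return "UNVERIFIED"
```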
Abstract:
Semantic indexing methods and systems are disclosed. One such method is directed to training a semantic indexing model by employing an expanded query. The query can be expanded by merging the query with documents that are relevant to the query to compensate for a lack of training data. In accordance with another exemplary aspect, time difference features can be incorporated into a semantic indexing model to account for changes in query distributions over time.
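A toy sketch of the query-expansion idea and of one possible time-difference feature; the helper names and the relevance-judgment format are assumptions for illustration.

```python
from datetime import date

def expand_query(query, relevant_docs):
    # Merge the query with the text of its relevant documents to form an expanded query.
    return query + " " + " ".join(relevant_docs)

def build_training_pairs(queries, relevance):
    # relevance: dict mapping each training query to the documents judged relevant to it.
    return [(expand_query(q, relevance[q]), relevance[q]) for q in queries]

def time_difference_feature(query_date, doc_date):
    # Age of a document relative to the query, in days, as one possible time feature.
    return (query_date - doc_date).days

print(time_difference_feature(date(2024, 6, 1), date(2024, 1, 1)))  # 152
```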
Abstract:
Systems and methods are provided for the autonomous generation of accurate healthcare summaries. Relevant healthcare questions can be predicted based on a preceding context by employing a fine-tuned transformer model. Answers to the relevant healthcare questions can be predicted by employing an extractive question answering model and utilizing extracted healthcare data from a healthcare data record to obtain predicted healthcare answers. Complete sentences can be synthesized, with artificial intelligence (AI), from the predicted healthcare answers and the relevant healthcare questions to obtain healthcare summary sentences. A healthcare technical report can be generated autonomously with AI from the healthcare summary sentences to assist with decision making by a healthcare professional.
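A minimal sketch of the pipeline built from generic Hugging Face components; the model names, the question-prediction prompt, and the sentence-synthesis template are placeholders, not the fine-tuned models described above.

```python
from transformers import pipeline

# Stand-ins for the fine-tuned question predictor and the extractive QA model.
question_generator = pipeline("text2text-generation", model="t5-small")
answer_extractor = pipeline("question-answering")

def summarize(preceding_context, health_record_text):
    # Predict relevant questions from the preceding context.
    questions = [out["generated_text"]
                 for out in question_generator("ask a question about: " + preceding_context)]
    sentences = []
    for q in questions:
        # Extract an answer for each question from the healthcare data record.
        ans = answer_extractor(question=q, context=health_record_text)["answer"]
        # Synthesize a complete sentence from the predicted question and extracted answer.
        sentences.append(f"{q.rstrip('?')}: {ans}.")
    return " ".join(sentences)  # assembled into the technical report downstream
```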
Abstract:
A computer-implemented method for counting and extracting opinions in product reviews is provided. The method includes inputting a hypothesis opinion, a product name, and product reviews relating to a product, applying a decontextualization component to the product reviews by using the product name, applying the decontextualization component to the hypothesis opinion by using the product name, applying an entailment model to classify each sentence of the decontextualized product reviews against the decontextualized hypothesis opinion, and outputting one or more sentences classified as entailing the hypothesis opinion and a count of corresponding reviews.
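A sketch of the counting procedure; decontextualize() and entails() are placeholders for the decontextualization component and the entailment model named in the abstract, and the sentence splitting is deliberately crude.

```python
def count_opinions(hypothesis, product_name, reviews, decontextualize, entails):
    # Decontextualize the hypothesis opinion using the product name.
    hyp = decontextualize(hypothesis, product_name)
    matched_sentences, review_count = [], 0
    for review in reviews:
        hit = False
        for sentence in review.split(". "):   # crude sentence splitting
            # Decontextualize each review sentence, then test entailment against the hypothesis.
            if entails(decontextualize(sentence, product_name), hyp):
                matched_sentences.append(sentence)
                hit = True
        review_count += hit
    return matched_sentences, review_count
```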
Abstract:
Methods and systems for disentangled data generation include accessing a dataset including pairs, each formed from a given input text structure and a given style label for the input text structure. An encoder is trained to disentangle a sequential text input into disentangled representations, including a content embedding and a style embedding, based on a subset of the dataset, using an objective function that includes a regularization term that minimizes mutual information between the content embedding and the style embedding. A generator is trained to generate a text output that includes the content from the content embedding, expressed in a style other than that represented by the style embedding of the text input.
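A conceptual sketch of the objective described above; mi_estimate() stands in for whatever mutual information estimator (for example, a variational bound) the actual system uses, and the generator interface is an assumption.

```python
def disentanglement_loss(content_emb, style_emb, reconstruction_loss,
                         mi_estimate, mi_weight=1.0):
    # Objective = reconstruction term + regularizer that penalizes shared information
    # between the content embedding and the style embedding.
    return reconstruction_loss + mi_weight * mi_estimate(content_emb, style_emb)

def transfer_style(generator, content_emb, target_style_emb):
    # Generation step: content from the input, expressed in a different target style.
    return generator(content_emb, target_style_emb)
```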
Abstract:
A computer-implemented method is provided for generating follow-up questions for multi-hop bridge-type question answering. The method includes retrieving a premise for an input multi-hop bridge-type question. The method further includes assigning, by a three-way neural network based controller, a classification of the premise against the input multi-hop bridge-type question as being any of irrelevant, including a final answer, or including intermediate information. The method also includes outputting the final answer in relation to a first hop of the multi-hop bridge-type question responsive to the classification being that the premise includes the final answer. The method additionally includes generating a follow-up question by a neural network and repeating said retrieving, assigning, outputting, and generating steps for the follow-up question, responsive to the classification being that the premise includes the intermediate information.
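A sketch of the control loop, assuming placeholder callables for the retriever, the three-way controller, and the follow-up question generator; the hop limit is an illustrative simplification.

```python
def answer_bridge_question(question, retrieve_premise, classify, generate_followup, max_hops=3):
    for _ in range(max_hops):
        premise = retrieve_premise(question)
        label = classify(premise, question)   # "irrelevant" | "final_answer" | "intermediate"
        if label == "final_answer":
            return premise                    # output the answer found at this hop
        if label == "intermediate":
            # Generate a follow-up question about the bridge entity and repeat the loop.
            question = generate_followup(question, premise)
        # "irrelevant": keep the question and try retrieving again on the next iteration.
    return None
```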
Abstract:
A computer-implemented method and system are provided for teaching syntax to a neural network based natural language inference model during training. The method includes selectively performing, by a hardware processor, person reversal on a set of hypothesis sentences, based on person reversal prevention criteria, to obtain a first training data set. The method further includes enhancing, by the hardware processor, a robustness of the neural network based natural language inference model to syntax changes by training the neural network based natural language inference model on original training data combined with the first training data set.
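A toy sketch of person reversal on a hypothesis sentence with one possible prevention criterion; the pronoun maps and the skip rule are illustrative assumptions, not the method's actual criteria.

```python
FIRST_TO_SECOND = {"i": "you", "my": "your", "me": "you", "we": "you", "our": "your"}
SECOND_TO_FIRST = {"you": "I", "your": "my"}

def reverse_person(hypothesis):
    words = hypothesis.split()
    lowered = [w.lower() for w in words]
    # Prevention criterion (assumed): skip sentences that mix first and second person,
    # where reversal could change meaning rather than just surface syntax.
    if any(w in FIRST_TO_SECOND for w in lowered) and any(w in SECOND_TO_FIRST for w in lowered):
        return hypothesis
    swapped = [FIRST_TO_SECOND.get(w.lower(), SECOND_TO_FIRST.get(w.lower(), w)) for w in words]
    return " ".join(swapped)

print(reverse_person("You finished your homework."))  # -> "I finished my homework."
```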