METHOD TO DETECT AND FIX HALLUCINATIONS IN GENERATIVE LARGE LANGUAGE MODELS

    Publication Number: US20240386253A1

    Publication Date: 2024-11-21

    Application Number: US18358410

    Filing Date: 2023-07-25

Abstract: A system and method to analyze a generative model's output to determine whether the model is hallucinating, and to verify the facts stated in that output. Additionally, when a hallucination or incorrect fact is detected, the correct fact can be supplied as context and the model asked to regenerate its output with that fact taken into consideration, optionally while lowering the temperature or increasing the top-k value from which tokens are chosen. A method is provided comprising: obtaining output produced by the generative model based on input provided to the generative model; performing summarization and topic extraction on the output to obtain one or more topics; performing fact checking on each of the one or more topics to produce a consolidated ground truth context; and declaring whether the generative model is hallucinating based on the consolidated ground truth context.
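The abstract's detect-and-regenerate loop can be summarized in a short sketch. Below is a minimal, self-contained Python illustration; every name in it (toy_model, extract_topics, fact_check, the KEY_FACTS table) is a hypothetical stand-in invented for this sketch under stated assumptions, not the patent's actual implementation.

# Minimal sketch of the pipeline in the abstract: obtain output,
# extract topics, fact-check each topic into a consolidated ground
# truth context, declare hallucination, and regenerate if needed.
from dataclasses import dataclass

# Hypothetical ground-truth store: topic -> (salient token, reference text).
# A real system would query a knowledge base or search index instead.
KEY_FACTS = {
    "boiling point of water": ("100", "Water boils at 100 degrees Celsius at sea level."),
    "capital of france": ("Paris", "The capital of France is Paris."),
}

@dataclass
class CheckResult:
    topic: str
    supported: bool   # does the output agree with the reference?
    reference: str    # retrieved ground-truth statement

def extract_topics(output: str) -> list[str]:
    """Toy stand-in for the summarization and topic-extraction step:
    return known topics whose keywords appear in the output."""
    text = output.lower()
    return [t for t in KEY_FACTS
            if any(w in text for w in t.split() if len(w) > 3)]

def fact_check(output: str, topic: str) -> CheckResult:
    """Toy fact check: the output is 'supported' only if it contains
    the salient token of the reference statement."""
    token, reference = KEY_FACTS[topic]
    return CheckResult(topic, token in output, reference)

def detect_and_fix(model, prompt: str, temperature: float = 0.7):
    output = model(prompt, temperature)                 # step 1: obtain output
    topics = extract_topics(output)                     # step 2: topic extraction
    results = [fact_check(output, t) for t in topics]   # step 3: fact checking
    context = " ".join(r.reference for r in results)    # consolidated ground truth
    if all(r.supported for r in results):               # step 4: declare
        return output, False
    # Step 5: regenerate with the ground truth supplied as context,
    # at a lower temperature, as the abstract optionally suggests.
    corrected = (f"Using only these verified facts: {context}\n"
                 f"Answer again: {prompt}")
    return model(corrected, temperature * 0.5), True

def toy_model(prompt: str, temperature: float) -> str:
    # Hypothetical model: answers correctly once verified facts are
    # supplied, and hallucinates a wrong value otherwise.
    if "verified facts" in prompt:
        return "Water boils at 100 degrees Celsius at sea level."
    return "Water boils at 90 degrees Celsius at sea level."

if __name__ == "__main__":
    answer, hallucinated = detect_and_fix(toy_model, "At what temperature does water boil?")
    print("hallucination detected:", hallucinated)
    print("final answer:", answer)

In a production system, the toy helpers above would be replaced with real summarization, topic extraction, and retrieval-backed fact checking; only the control flow mirrors the method's claimed steps.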
