
How to detect whether ConversationalRetrievalChain called the OpenAI LLM?

2024/10/6 14:28:22 Source: https://blog.csdn.net/suiusoar/article/details/141281982


Background:

I have the following code:


from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

chat_history = []
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(chunks, embeddings)  # chunks: a list of Document objects
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0.1), db.as_retriever())
result = qa({"question": "What is stack overflow", "chat_history": chat_history})

The code creates embeddings, builds an in-memory FAISS vector store from the text in my `chunks` array, then creates a ConversationalRetrievalChain and asks a question.


Based on what I understand from ConversationalRetrievalChain, when asked a question, it will first query the FAISS vector db, then, if it can't find anything matching, it will go to OpenAI to answer that question. (is my understanding correct?)


How can I detect whether it actually called OpenAI to get the answer, or whether it was able to get it from the in-memory vector DB? The `result` object contains `question`, `chat_history`, and `answer` properties and nothing else.


Answer:

"Based on what I understand from ConversationalRetrievalChain, when asked a question, it will first query the FAISS vector db, then, if it can't find anything matching, it will go to OpenAI to answer that question."


This part is not correct. Each time ConversationalRetrievalChain receives a query in the conversation, it rephrases the question, retrieves documents from your vector store (FAISS in your case), and returns an answer generated by the LLM (OpenAI in your case). In other words, ConversationalRetrievalChain is the conversational version of RetrievalQA, and the LLM is called on every query.

