How to retrieve source documents via LangChain's get_relevant_documents method only if the answer is from the custom knowledge base

I am making a chatbot which accesses an external knowledge base of documents. I want to get the relevant documents the bot accessed for its answer, but this shouldn't happen when the user input is something like "hello", "how are you", "what's 2+2", or any question whose answer is not retrieved from the external knowledge base. In that case, I want retriever.get_relevant_documents(query) (or any other line) to return an empty list or something similar.

import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain 
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

os.environ["OPENAI_API_KEY"] = ""

custom_template = """
This is a conversation with a human. Answer the questions you get based on the knowledge you have.
If you don't know the answer, just say that you don't; don't try to make up an answer.
Chat History:
{chat_history}
Follow Up Input: {question}
"""
CUSTOM_QUESTION_PROMPT = PromptTemplate.from_template(custom_template)

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",  # Name of the language model
    temperature=0  # Parameter that controls the randomness of the generated responses
)

embeddings = OpenAIEmbeddings()

docs = [
    "Buildings are made out of brick",
    "Buildings are made out of wood",
    "Buildings are made out of stone",
    "Buildings are made out of atoms",
    "Buildings are made out of building materials",
    "Cars are made out of metal",
    "Cars are made out of plastic",
  ]

vectorstore = FAISS.from_texts(docs, embeddings)

retriever = vectorstore.as_retriever()

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever,
    condense_question_prompt=CUSTOM_QUESTION_PROMPT,
    memory=memory
)

query = "what are cars made of?"
result = qa({"question": query})
print(result)
print(retriever.get_relevant_documents(query))

I tried setting a score threshold on the retriever, but irrelevant queries still return documents with high similarity scores, while other prompts that do have a relevant document return no documents at all.

retriever = vectorstore.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .9})
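One likely explanation, offered here as an assumption rather than something verified against the snippet above: a FAISS index returns raw L2 distances, which LangChain has to convert into a 0-to-1 relevance score before applying score_threshold, so the cutoff may not behave the way the raw distances suggest. The filtering idea itself is straightforward. Below is a self-contained sketch of the same "empty list below the threshold" behaviour, using cosine similarity on toy 2-d vectors instead of OpenAI embeddings (all names and vectors here are illustrative, not part of the LangChain API):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve_above_threshold(query_vec, doc_vecs, docs, threshold=0.9):
    # Keep only documents whose similarity clears the threshold;
    # an off-topic query should come back as an empty list.
    scored = [(cosine_similarity(query_vec, v), d) for v, d in zip(doc_vecs, docs)]
    return [d for score, d in scored if score >= threshold]

# Toy 2-d "embeddings" (hypothetical, for illustration only)
docs = ["Cars are made out of metal", "Buildings are made out of brick"]
doc_vecs = [[1.0, 0.0], [0.0, 1.0]]

print(retrieve_above_threshold([0.9, 0.1], doc_vecs, docs))  # -> ['Cars are made out of metal']
print(retrieve_above_threshold([0.5, 0.5], doc_vecs, docs))  # ambiguous query -> []
```

The key point is that the threshold only does what you expect if you know which scale the scores are on; with real embeddings it is worth printing the raw scores first.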
Best Answer

To solve this problem, I had to change the chain type to RetrievalQA and introduce agents and tools.

import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.agents import AgentExecutor, Tool, initialize_agent
from langchain.agents.types import AgentType

os.environ["OPENAI_API_KEY"] = ""

system_message = """
"You are the XYZ bot."
"This is conversation with a human. Answer the questions you get based on the knowledge you have."
"If you don t know the answer, just say that you don t, don t try to make up an answer."
"""

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",  # Name of the language model
    temperature=0  # Parameter that controls the randomness of the generated responses
)

embeddings = OpenAIEmbeddings()

docs = [
    "Buildings are made out of brick",
    "Buildings are made out of wood",
    "Buildings are made out of stone",
    "Buildings are made out of atoms",
    "Buildings are made out of building materials",
    "Cars are made out of metal",
    "Cars are made out of plastic",
  ]

vectorstore = FAISS.from_texts(docs, embeddings)

retriever = vectorstore.as_retriever()

memory = ConversationBufferMemory(memory_key="chat_history", input_key="input", return_messages=True, output_key="output")

qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vectorstore.as_retriever(),
        verbose=True,
        return_source_documents=True
    )

tools = [
        Tool(
            name="doc_search_tool",
            func=qa,
            description=(
               "This tool is used to retrieve information from the knowledge base"
            )
        )
    ]

agent = initialize_agent(
        agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
        tools=tools,
        llm=llm,
        memory=memory,
        return_source_documents=True,
        return_intermediate_steps=True,
        agent_kwargs={"system_message": system_message}
        )

query1 = "what are buildings made of?"
result1 = agent(query1)


query2 = "who are you?"
result2 = agent(query2)

If the result has sources, it will have values under the intermediate_steps key, and the source documents can be accessed via result1["intermediate_steps"][0][1]["source_documents"].

Otherwise, when the query does not require sources, result2["intermediate_steps"] will be empty.
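Based on the result shape described above (the indexing into intermediate_steps is taken from this answer, not re-checked against the LangChain source), the lookup can be wrapped in a small helper that returns an empty list when the agent answered directly without calling the tool:

```python
def get_source_documents(result):
    # intermediate_steps is a list of (action, observation) pairs;
    # the observation from the RetrievalQA tool is a dict carrying
    # "source_documents" when return_source_documents=True.
    steps = result.get("intermediate_steps", [])
    if not steps:
        return []  # agent answered directly, no retrieval happened
    observation = steps[0][1]
    if isinstance(observation, dict):
        return observation.get("source_documents", [])
    return []

# Mocked results mirroring the two queries above
result1 = {"intermediate_steps": [
    ("doc_search_tool", {"source_documents": ["Buildings are made out of brick"]})]}
result2 = {"intermediate_steps": []}

print(get_source_documents(result1))  # -> ['Buildings are made out of brick']
print(get_source_documents(result2))  # -> []
```

This gives exactly the behaviour asked for in the question: an empty list for "hello"-style inputs, the retrieved documents otherwise.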

Other Answers

Add the return_source_documents parameter to the chain, as shown below:

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever,
    condense_question_prompt=CUSTOM_QUESTION_PROMPT,
    memory=memory,
    return_source_documents=True
)

query = "what are cars made of?"
result = qa({"question": query})

This way, you will get your source documents back along with the answer, including the many similar documents that were retrieved.

To access the answer and all the relevant documents:

answer = result.get("answer")

docs = result.get("source_documents", [])
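Since the retriever always returns the top-k most similar documents, the source_documents list can contain near-duplicates. A self-contained sketch of deduplicating them by content, using a minimal stand-in for LangChain's Document class (which exposes a page_content attribute; the stand-in is for illustration only):

```python
from dataclasses import dataclass

@dataclass
class Document:
    # Minimal stand-in for langchain.schema.Document
    page_content: str

def dedupe_sources(source_documents):
    # Keep the first occurrence of each distinct page_content.
    seen = set()
    unique = []
    for doc in source_documents:
        if doc.page_content not in seen:
            seen.add(doc.page_content)
            unique.append(doc)
    return unique

sources = [Document("Cars are made out of metal"),
           Document("Cars are made out of metal"),
           Document("Cars are made out of plastic")]
print([d.page_content for d in dedupe_sources(sources)])
# -> ['Cars are made out of metal', 'Cars are made out of plastic']
```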

Just a comment, since I can only add comments: my question is, when you add the agent, do your responses become shorter? Were you able to solve that?

This helped me get the sources:

for doc in response["source_documents"]:
    print(doc.metadata)



