RagModel:v0

| Path | Value |
|------|-------|
| `raw_data_artifact` | `raw_data` |
| `embedding_model` | `sentence-transformers/all-MiniLM-L6-v2` |
| `device` | `cpu` |
| `embedding_model_norm_embed` | `true` |
| `chunk_size` | `500` |
| `chunk_overlap` | `0` |
| `vectorstore_path` | `./faiss_index` |
| `rag_prompt` | `Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:` |
| `chat_model` | `gpt-4` |
| `cm_max_new_tokens` | `128` |
| `cm_quantize` | `false` |
| `cm_temperature` | `0.1` |
| `retrieval_chain_type` | `stuff` |
| `inference_batch_size` | `16` |
| `model` | `RetrievalQA(combine_documents_chain=StuffDocumentsChain(llm_chain=LLMChain(prompt=PromptTemplate(input_variables=['context', 'question'], template="Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer: \n"), llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x2df5c1350>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x2dec8b110>, model_name='gpt-4', temperature=0.1, openai_api_key=SecretStr('**********'), openai_proxy='', max_tokens=128)), document_variable_name='context'), return_source_documents=True, retriever=VectorStoreRetriever(tags=['FAISS', 'HuggingFaceEmbeddings'], vectorstore=<langchain_community.vectorstores.faiss.FAISS object at 0x2947b7e90>))` |
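To make the `chunk_size=500` / `chunk_overlap=0` settings concrete, here is a minimal sketch of fixed-size character chunking. This is a simplified stand-in, not the splitter used in the config above: LangChain's text splitters prefer to cut on separator boundaries, while this hypothetical `chunk_text` helper cuts purely by character count.

```python
def chunk_text(text: str, chunk_size: int = 500, chunk_overlap: int = 0) -> list[str]:
    """Split text into character windows of length chunk_size.

    Consecutive windows share chunk_overlap characters. With
    chunk_overlap=0 (as in the config), windows are disjoint.
    Hypothetical helper for illustration only.
    """
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 1200-character document with the config's parameters:
doc = "x" * 1200
chunks = chunk_text(doc, chunk_size=500, chunk_overlap=0)
print([len(c) for c in chunks])  # → [500, 500, 200]
```

With `chunk_overlap=0` every character is embedded exactly once, which keeps the FAISS index small at the cost of possibly splitting a sentence across two chunks; a nonzero overlap trades index size for boundary context.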
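At query time, the `stuff` chain type fills the `rag_prompt` template by "stuffing" the retrieved chunks into `{context}` and the user query into `{question}`, then sends the result to the chat model. The substitution itself is plain template formatting, sketched below; the `context` and `question` strings are made-up examples, not values from the run above.

```python
# The rag_prompt template from the config, with its two input variables.
rag_prompt = (
    "Use the following pieces of context to answer the question at the end. "
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n\n"
    "{context}\n\n"
    "Question: {question}\n"
    "Helpful Answer:"
)

# Hypothetical retrieved chunk and query, for illustration only.
filled = rag_prompt.format(
    context="FAISS is a library for efficient similarity search.",
    question="What is FAISS?",
)
print(filled)
```

The `stuff` strategy is the simplest of the combine-documents options: all retrieved chunks go into a single prompt, so the total chunk text plus the question must fit in the model's context window.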