RagModel:v7
| Path | Value |
| --- | --- |
| raw_data_artifact | raw_data |
| embedding_model | sentence-transformers/all-MiniLM-L6-v2 |
| device | cpu |
| embedding_model_norm_embed | true |
| chunk_size | 500 |
| chunk_overlap | 0 |
| vectorstore_path | ./faiss_index |
| rag_prompt | Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer: |
| chat_model | gemini-1.5-pro |
| cm_max_new_tokens | 128 |
| cm_quantize | false |
| cm_temperature | 0.1 |
| retrieval_chain_type | stuff |
| inference_batch_size | 16 |
| model | serialized `RetrievalQA` chain, reproduced below |

```
RetrievalQA(
    combine_documents_chain=StuffDocumentsChain(
        llm_chain=LLMChain(
            prompt=PromptTemplate(
                input_variables=['context', 'question'],
                template="Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer: \n"
            ),
            llm=ChatGoogleGenerativeAI(
                model='models/gemini-1.5-pro',
                temperature=0.1,
                client=<google.ai.generativelanguage_v1beta.services.generative_service.client.GenerativeServiceClient object at 0x2aa491510>,
                default_metadata=()
            )
        ),
        document_variable_name='context'
    ),
    return_source_documents=True,
    retriever=VectorStoreRetriever(
        tags=['FAISS', 'HuggingFaceEmbeddings'],
        vectorstore=<langchain_community.vectorstores.faiss.FAISS object at 0x2d95a4a90>
    )
)
```
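The `chunk_size=500` / `chunk_overlap=0` pair governs how the raw data is split before embedding. The exact splitter used is not recorded in the table, so the following is only a minimal sketch of fixed-size character chunking under those two parameters; `chunk_text` is an illustrative name, not part of the logged model.

```python
def chunk_text(text: str, chunk_size: int = 500, chunk_overlap: int = 0) -> list[str]:
    """Split text into fixed-size character chunks, stepping back by the overlap.

    Sketch only: real splitters (e.g. recursive character splitting) also try
    to break on separators rather than at hard character offsets.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # with overlap 0, chunks are disjoint
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# With the logged settings, a 1200-character document yields chunks of
# lengths 500, 500, and 200.
chunks = chunk_text("x" * 1200)
```

A non-zero `chunk_overlap` would make consecutive chunks share their boundary characters, which helps when an answer spans a chunk border.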
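At query time, the `rag_prompt` template is filled with the retrieved chunks (`{context}`) and the user question (`{question}`). For these two variables the `PromptTemplate` substitution reduces to plain `str.format`, sketched below; `fill_prompt` is an illustrative helper name.

```python
# The rag_prompt value from the table above, with its two template variables.
RAG_PROMPT = (
    "Use the following pieces of context to answer the question at the end. "
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n\n"
    "{context}\n\n"
    "Question: {question}\n"
    "Helpful Answer: "
)

def fill_prompt(context: str, question: str) -> str:
    # PromptTemplate substitution for f-string-style variables is equivalent
    # to str.format on the raw template.
    return RAG_PROMPT.format(context=context, question=question)
```

With the `stuff` chain type, all retrieved documents are concatenated into the single `{context}` slot of one prompt, rather than being summarized or processed one at a time.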