# Chatflows

- [LangChain](/readme/chatflows/langchain.md): Learn how Tailwinds integrates with the LangChain framework
- [Agents](/readme/chatflows/langchain/agents.md): LangChain Agent Nodes
- [Airtable Agent](/readme/chatflows/langchain/agents/airtable-agent.md): Agent used to answer queries on an Airtable table.
- [AutoGPT](/readme/chatflows/langchain/agents/autogpt.md): Autonomous agent with chain of thoughts for self-guided task completion.
- [BabyAGI](/readme/chatflows/langchain/agents/babyagi.md): Task-driven autonomous agent that creates new tasks and reprioritizes the task list based on the objective.
- [CSV Agent](/readme/chatflows/langchain/agents/csv-agent.md): Agent used to answer queries on CSV data.
- [Conversational Agent](/readme/chatflows/langchain/agents/conversational-agent.md): Conversational agent for a chat model. It utilizes chat-specific prompts.
- [OpenAI Assistant](/readme/chatflows/langchain/agents/openai-assistant.md): An agent that uses the OpenAI Assistant API to pick the tool and args to call.
- [Threads](/readme/chatflows/langchain/agents/openai-assistant/threads.md)
- [ReAct Agent Chat](/readme/chatflows/langchain/agents/react-agent-chat.md)
- [ReAct Agent LLM](/readme/chatflows/langchain/agents/react-agent-llm.md)
- [Tool Agent](/readme/chatflows/langchain/agents/tool-agent.md): Agent that uses Function Calling to pick the tools and args to call.
- [XML Agent](/readme/chatflows/langchain/agents/xml-agent.md): Agent designed for LLMs that are good at reasoning over and writing XML (e.g. Anthropic Claude).
- [Cache](/readme/chatflows/langchain/cache.md): LangChain Cache Nodes
- [InMemory Cache](/readme/chatflows/langchain/cache/in-memory-cache.md): Caches LLM responses in local memory; cleared when the app is restarted.
- [InMemory Embedding Cache](/readme/chatflows/langchain/cache/inmemory-embedding-cache.md): Caches generated embeddings in memory to avoid recomputing them.
- [Momento Cache](/readme/chatflows/langchain/cache/momento-cache.md): Cache LLM response using Momento, a distributed, serverless cache.
- [Redis Cache](/readme/chatflows/langchain/cache/redis-cache.md): Cache LLM response in Redis, useful for sharing cache across multiple processes or servers.
- [Redis Embeddings Cache](/readme/chatflows/langchain/cache/redis-embeddings-cache.md): Cache generated embeddings in Redis, useful for sharing the cache across multiple processes or servers.
- [Upstash Redis Cache](/readme/chatflows/langchain/cache/upstash-redis-cache.md): Cache LLM responses in Upstash Redis, a serverless data platform for Redis and Kafka.
- [Chains](/readme/chatflows/langchain/chains.md): LangChain Chain Nodes
- [GET API Chain](/readme/chatflows/langchain/chains/get-api-chain.md): Chain to run queries against a GET API.
- [OpenAPI Chain](/readme/chatflows/langchain/chains/openapi-chain.md): Chain that automatically selects and calls APIs based only on an OpenAPI spec.
- [POST API Chain](/readme/chatflows/langchain/chains/post-api-chain.md): Chain to run queries against a POST API.
- [Conversation Chain](/readme/chatflows/langchain/chains/conversation-chain.md): Chat-model-specific conversational chain with memory.
- [Conversational Retrieval QA Chain](/readme/chatflows/langchain/chains/conversational-retrieval-qa-chain.md)
- [LLM Chain](/readme/chatflows/langchain/chains/llm-chain.md): Chain to run queries against LLMs.
- [Multi Prompt Chain](/readme/chatflows/langchain/chains/multi-prompt-chain.md): Chain that automatically picks an appropriate prompt from multiple prompt templates.
- [Multi Retrieval QA Chain](/readme/chatflows/langchain/chains/multi-retrieval-qa-chain.md): QA Chain that automatically picks an appropriate vector store from multiple retrievers.
- [Retrieval QA Chain](/readme/chatflows/langchain/chains/retrieval-qa-chain.md): QA chain to answer a question based on the retrieved documents.
- [Sql Database Chain](/readme/chatflows/langchain/chains/sql-database-chain.md): Answer questions over a SQL database.
- [Vectara QA Chain](/readme/chatflows/langchain/chains/vectara-chain.md)
- [VectorDB QA Chain](/readme/chatflows/langchain/chains/vectordb-qa-chain.md): QA chain for vector databases.
- [Chat Models](/readme/chatflows/langchain/chat-models.md): LangChain Chat Model Nodes
- [AWS ChatBedrock](/readme/chatflows/langchain/chat-models/aws-chatbedrock.md): Wrapper around AWS Bedrock large language models that use the Chat endpoint.
- [Azure ChatOpenAI](/readme/chatflows/langchain/chat-models/azure-chatopenai-1.md)
- [NIBittensorChat](/readme/chatflows/langchain/chat-models/nibittensorchat.md): Wrapper around Bittensor subnet 1 large language models.
- [ChatAnthropic](/readme/chatflows/langchain/chat-models/chatanthropic.md): Wrapper around ChatAnthropic large language models that use the Chat endpoint.
- [ChatCohere](/readme/chatflows/langchain/chat-models/chatcohere.md): Wrapper around Cohere Chat Endpoints.
- [Chat Fireworks](/readme/chatflows/langchain/chat-models/chat-fireworks.md): Wrapper around Fireworks Chat Endpoints.
- [ChatGoogleGenerativeAI](/readme/chatflows/langchain/chat-models/google-ai.md)
- [ChatGooglePaLM](/readme/chatflows/langchain/chat-models/chatgooglepalm.md): Wrapper around Google MakerSuite PaLM large language models using the Chat endpoint.
- [Google VertexAI](/readme/chatflows/langchain/chat-models/google-vertexai.md)
- [ChatHuggingFace](/readme/chatflows/langchain/chat-models/chathuggingface.md): Wrapper around HuggingFace large language models.
- [ChatMistralAI](/readme/chatflows/langchain/chat-models/mistral-ai.md)
- [ChatOllama](/readme/chatflows/langchain/chat-models/chatollama.md)
- [ChatOllama Function](/readme/chatflows/langchain/chat-models/chatollama-funtion.md): Run open-source, function-calling-compatible LLMs on Ollama.
- [ChatOpenAI](/readme/chatflows/langchain/chat-models/azure-chatopenai.md)
- [ChatOpenAI Custom](/readme/chatflows/langchain/chat-models/chatopenai-custom.md): Custom/FineTuned model using OpenAI Chat compatible API.
- [ChatTogetherAI](/readme/chatflows/langchain/chat-models/chattogetherai.md): Wrapper around TogetherAI large language models.
- [GroqChat](/readme/chatflows/langchain/chat-models/groqchat.md): Wrapper around Groq API with LPU Inference Engine.
- [Document Loaders](/readme/chatflows/langchain/document-loaders.md): LangChain Document Loader Nodes
- [API Loader](/readme/chatflows/langchain/document-loaders/api-loader.md): Load data from an API.
- [Airtable](/readme/chatflows/langchain/document-loaders/airtable.md): Load data from Airtable table.
- [Apify Website Content Crawler](/readme/chatflows/langchain/document-loaders/apify-website-content-crawler.md): Load data from Apify Website Content Crawler.
- [Cheerio Web Scraper](/readme/chatflows/langchain/document-loaders/cheerio-web-scraper.md)
- [Confluence](/readme/chatflows/langchain/document-loaders/confluence.md): Load data from a Confluence document.
- [Csv File](/readme/chatflows/langchain/document-loaders/csv-file.md): Load data from CSV files.
- [Custom Document Loader](/readme/chatflows/langchain/document-loaders/custom-document-loader.md): Custom function for loading documents.
- [Document Store](/readme/chatflows/langchain/document-loaders/document-store.md): Load data from pre-configured document stores.
- [Docx File](/readme/chatflows/langchain/document-loaders/docx-file.md): Load data from DOCX files.
- [Figma](/readme/chatflows/langchain/document-loaders/figma.md): Load data from a Figma file.
- [FireCrawl](/readme/chatflows/langchain/document-loaders/firecrawl.md): Load data from URL using FireCrawl.
- [Folder with Files](/readme/chatflows/langchain/document-loaders/folder-with-files.md): Load data from folder with multiple files.
- [GitBook](/readme/chatflows/langchain/document-loaders/gitbook.md): Load data from GitBook.
- [Github](/readme/chatflows/langchain/document-loaders/github.md): Load data from a GitHub repository.
- [Json File](/readme/chatflows/langchain/document-loaders/json-file.md): Load data from JSON files.
- [Json Lines File](/readme/chatflows/langchain/document-loaders/json-lines-file.md): Load data from JSON Lines files.
- [Notion Database](/readme/chatflows/langchain/document-loaders/notion-database.md): Load data from Notion Database (each row is a separate document with all properties as metadata).
- [Notion Folder](/readme/chatflows/langchain/document-loaders/notion-folder.md): Load data from the exported and unzipped Notion folder.
- [Notion Page](/readme/chatflows/langchain/document-loaders/notion-page.md): Load data from Notion Page (including child pages all as separate documents).
- [PDF Files](/readme/chatflows/langchain/document-loaders/pdf-file.md)
- [Plain Text](/readme/chatflows/langchain/document-loaders/plain-text.md): Load data from plain text.
- [Playwright Web Scraper](/readme/chatflows/langchain/document-loaders/playwright-web-scraper.md)
- [Puppeteer Web Scraper](/readme/chatflows/langchain/document-loaders/puppeteer-web-scraper.md)
- [AWS S3 File Loader](/readme/chatflows/langchain/document-loaders/s3-file-loader.md)
- [SearchApi For Web Search](/readme/chatflows/langchain/document-loaders/searchapi-for-web-search.md): Load data from real-time search results.
- [SerpApi For Web Search](/readme/chatflows/langchain/document-loaders/serpapi-for-web-search.md): Load and process data from web search results.
- [Spider Web Scraper/Crawler](/readme/chatflows/langchain/document-loaders/spider-web-scraper-crawler.md): Scrape & Crawl the web with Spider.
- [Text File](/readme/chatflows/langchain/document-loaders/text-file.md): Load data from text files.
- [Unstructured File Loader](/readme/chatflows/langchain/document-loaders/unstructured-file-loader.md): Use Unstructured.io to load data from a file path.
- [Unstructured Folder Loader](/readme/chatflows/langchain/document-loaders/unstructured-folder-loader.md): Use Unstructured.io to load data from a folder. Note: .png and .heic files are currently unsupported until Unstructured is updated.
- [VectorStore To Document](/readme/chatflows/langchain/document-loaders/vectorstore-to-document.md): Search documents with scores from vector store.
- [Embeddings](/readme/chatflows/langchain/embeddings.md): LangChain Embedding Nodes
- [AWS Bedrock Embeddings](/readme/chatflows/langchain/embeddings/aws-bedrock-embeddings.md): AWSBedrock embedding models to generate embeddings for a given text.
- [Azure OpenAI Embeddings](/readme/chatflows/langchain/embeddings/azure-openai-embeddings.md)
- [Cohere Embeddings](/readme/chatflows/langchain/embeddings/cohere-embeddings.md): Cohere API to generate embeddings for a given text
- [Google GenerativeAI Embeddings](/readme/chatflows/langchain/embeddings/googlegenerativeai-embeddings.md): Google Generative AI API to generate embeddings for a given text.
- [Google PaLM Embeddings](/readme/chatflows/langchain/embeddings/google-palm-embeddings.md): Google MakerSuite PaLM API to generate embeddings for a given text.
- [Google VertexAI Embeddings](/readme/chatflows/langchain/embeddings/googlevertexai-embeddings.md): Google VertexAI API to generate embeddings for a given text.
- [HuggingFace Inference Embeddings](/readme/chatflows/langchain/embeddings/huggingface-inference-embeddings.md): HuggingFace Inference API to generate embeddings for a given text.
- [MistralAI Embeddings](/readme/chatflows/langchain/embeddings/mistralai-embeddings.md): MistralAI API to generate embeddings for a given text.
- [Ollama Embeddings](/readme/chatflows/langchain/embeddings/ollama-embeddings.md): Generate embeddings for a given text using an open-source model on Ollama.
- [OpenAI Embeddings](/readme/chatflows/langchain/embeddings/openai-embeddings.md): OpenAI API to generate embeddings for a given text.
- [OpenAI Embeddings Custom](/readme/chatflows/langchain/embeddings/openai-embeddings-custom.md): OpenAI API to generate embeddings for a given text.
- [TogetherAI Embedding](/readme/chatflows/langchain/embeddings/togetherai-embedding.md): TogetherAI Embedding models to generate embeddings for a given text.
- [VoyageAI Embeddings](/readme/chatflows/langchain/embeddings/voyageai-embeddings.md): Voyage AI API to generate embeddings for a given text.
- [LLMs](/readme/chatflows/langchain/llms.md): LangChain LLM Nodes
- [AWS Bedrock](/readme/chatflows/langchain/llms/aws-bedrock.md): Wrapper around AWS Bedrock large language models.
- [Azure OpenAI](/readme/chatflows/langchain/llms/azure-openai.md): Wrapper around Azure OpenAI large language models.
- [NIBittensorLLM](/readme/chatflows/langchain/llms/nibittensorllm.md): Wrapper around Bittensor subnet 1 large language models.
- [Cohere](/readme/chatflows/langchain/llms/cohere.md): Wrapper around Cohere large language models.
- [GooglePaLM](/readme/chatflows/langchain/llms/googlepalm.md): Wrapper around Google MakerSuite PaLM large language models.
- [GoogleVertex AI](/readme/chatflows/langchain/llms/googlevertex-ai.md): Wrapper around GoogleVertexAI large language models.
- [HuggingFace Inference](/readme/chatflows/langchain/llms/huggingface-inference.md): Wrapper around HuggingFace large language models.
- [Ollama](/readme/chatflows/langchain/llms/ollama.md): Wrapper around open source large language models on Ollama.
- [OpenAI](/readme/chatflows/langchain/llms/openai.md): Wrapper around OpenAI large language models.
- [Replicate](/readme/chatflows/langchain/llms/replicate.md): Use Replicate to run open-source models in the cloud.
- [Memory](/readme/chatflows/langchain/memory.md): LangChain Memory Nodes
- [Buffer Memory](/readme/chatflows/langchain/memory/buffer-memory.md)
- [Buffer Window Memory](/readme/chatflows/langchain/memory/buffer-window-memory.md)
- [Conversation Summary Memory](/readme/chatflows/langchain/memory/conversation-summary-memory.md)
- [Conversation Summary Buffer Memory](/readme/chatflows/langchain/memory/conversation-summary-buffer-memory.md)
- [DynamoDB Chat Memory](/readme/chatflows/langchain/memory/dynamodb-chat-memory.md): Stores the conversation in a DynamoDB table.
- [MongoDB Atlas Chat Memory](/readme/chatflows/langchain/memory/mongodb-atlas-chat-memory.md): Stores the conversation in MongoDB Atlas.
- [Redis-Backed Chat Memory](/readme/chatflows/langchain/memory/redis-backed-chat-memory.md): Summarizes the conversation and stores the memory in a Redis server.
- [Upstash Redis-Backed Chat Memory](/readme/chatflows/langchain/memory/upstash-redis-backed-chat-memory.md): Summarizes the conversation and stores the memory in an Upstash Redis server.
- [Moderation](/readme/chatflows/langchain/moderation.md): LangChain Moderation Nodes
- [OpenAI Moderation](/readme/chatflows/langchain/moderation/openai-moderation.md): Check whether content complies with OpenAI usage policies.
- [Simple Prompt Moderation](/readme/chatflows/langchain/moderation/simple-prompt-moderation.md): Check whether the input contains any text from a deny list and prevent it from being sent to the LLM.
- [Output Parsers](/readme/chatflows/langchain/output-parsers.md): LangChain Output Parser Nodes
- [CSV Output Parser](/readme/chatflows/langchain/output-parsers/csv-output-parser.md): Parse the output of an LLM call as a comma-separated list of values.
- [Custom List Output Parser](/readme/chatflows/langchain/output-parsers/custom-list-output-parser.md): Parse the output of an LLM call as a list of values.
- [Structured Output Parser](/readme/chatflows/langchain/output-parsers/structured-output-parser.md): Parse the output of an LLM call into a given (JSON) structure.
- [Advanced Structured Output Parser](/readme/chatflows/langchain/output-parsers/advanced-structured-output-parser.md): Parse the output of an LLM call into a given structure by providing a Zod schema.
- [Prompts](/readme/chatflows/langchain/prompts.md): LangChain Prompt Nodes
- [Chat Prompt Template](/readme/chatflows/langchain/prompts/chat-prompt-template.md): Schema to represent a chat prompt.
- [Few Shot Prompt Template](/readme/chatflows/langchain/prompts/few-shot-prompt-template.md): Prompt template you can build with examples.
- [Prompt Template](/readme/chatflows/langchain/prompts/prompt-template.md): Schema to represent a basic prompt for an LLM.
- [Record Managers](/readme/chatflows/langchain/record-managers.md): LangChain Record Manager Nodes
- [Retrievers](/readme/chatflows/langchain/retrievers.md): LangChain Retriever Nodes
- [Cohere Rerank Retriever](/readme/chatflows/langchain/retrievers/cohere-rerank-retriever.md): Cohere Rerank indexes the documents from most to least semantically relevant to the query.
- [Embeddings Filter Retriever](/readme/chatflows/langchain/retrievers/embeddings-filter-retriever.md): A document compressor that uses embeddings to drop documents unrelated to the query.
- [HyDE Retriever](/readme/chatflows/langchain/retrievers/hyde-retriever.md): Use HyDE retriever to retrieve from a vector store.
- [LLM Filter Retriever](/readme/chatflows/langchain/retrievers/llm-filter-retriever.md): Iterate over the initially returned documents and extract, from each, only the content that is relevant to the query.
- [Multi Query Retriever](/readme/chatflows/langchain/retrievers/multi-query-retriever.md): Generate multiple queries from different perspectives for a given user input query.
- [Prompt Retriever](/readme/chatflows/langchain/retrievers/prompt-retriever.md): Store prompt template with name & description to be later queried by MultiPromptChain.
- [Reciprocal Rank Fusion Retriever](/readme/chatflows/langchain/retrievers/reciprocal-rank-fusion-retriever.md): Reciprocal Rank Fusion to re-rank search results by multiple query generation.
- [Similarity Score Threshold Retriever](/readme/chatflows/langchain/retrievers/similarity-score-threshold-retriever.md): Return results based on the minimum similarity percentage.
- [Vector Store Retriever](/readme/chatflows/langchain/retrievers/vector-store-retriever.md): Store vector store as retriever to be later queried by MultiRetrievalQAChain.
- [Voyage AI Rerank Retriever](/readme/chatflows/langchain/retrievers/page.md): Voyage AI Rerank indexes the documents from most to least semantically relevant to the query.
- [Text Splitters](/readme/chatflows/langchain/text-splitters.md): LangChain Text Splitter Nodes
- [Character Text Splitter](/readme/chatflows/langchain/text-splitters/character-text-splitter.md): Splits only on one type of character (defaults to "\n\n").
- [Code Text Splitter](/readme/chatflows/langchain/text-splitters/code-text-splitter.md): Split documents based on language-specific syntax.
- [Html-To-Markdown Text Splitter](/readme/chatflows/langchain/text-splitters/html-to-markdown-text-splitter.md): Converts HTML to Markdown, then splits your content into documents based on the Markdown headers.
- [Markdown Text Splitter](/readme/chatflows/langchain/text-splitters/markdown-text-splitter.md): Split your content into documents based on the Markdown headers.
- [Recursive Character Text Splitter](/readme/chatflows/langchain/text-splitters/recursive-character-text-splitter.md): Split documents recursively by different characters - starting with "\n\n", then "\n", then " ".
- [Token Text Splitter](/readme/chatflows/langchain/text-splitters/token-text-splitter.md): Splits a raw text string by first converting the text into BPE tokens, then splits these tokens into chunks and converts the tokens within a single chunk back into text.
- [Tools](/readme/chatflows/langchain/tools.md): LangChain Tool Nodes
- [BraveSearch API](/readme/chatflows/langchain/tools/bravesearch-api.md): Wrapper around BraveSearch API - a real-time API to access Brave search results.
- [Calculator](/readme/chatflows/langchain/tools/calculator.md): Perform calculations on response.
- [Chain Tool](/readme/chatflows/langchain/tools/chain-tool.md): Use a chain as an allowed tool for an agent.
- [Chatflow Tool](/readme/chatflows/langchain/tools/chatflow-tool.md): Execute another chatflow and get the response.
- [Custom Tool](/readme/chatflows/langchain/tools/custom-tool.md)
- [Exa Search](/readme/chatflows/langchain/tools/exa-search.md): Wrapper around Exa Search API - search engine fully designed for use by LLMs.
- [Google Custom Search](/readme/chatflows/langchain/tools/google-custom-search.md): Wrapper around Google Custom Search API - a real-time API to access Google search results.
- [OpenAPI Toolkit](/readme/chatflows/langchain/tools/openapi-toolkit.md): Load OpenAPI specification.
- [Python Interpreter](/readme/chatflows/langchain/tools/python-interpreter.md): Execute Python code in a Pyodide sandbox environment.
- [Read File](/readme/chatflows/langchain/tools/read-file.md): Read file from disk.
- [Request Get](/readme/chatflows/langchain/tools/request-get.md): Execute HTTP GET requests.
- [Request Post](/readme/chatflows/langchain/tools/request-post.md): Execute HTTP POST requests.
- [Retriever Tool](/readme/chatflows/langchain/tools/retriever-tool.md): Use a retriever as an allowed tool for an agent.
- [SearchApi](/readme/chatflows/langchain/tools/searchapi.md): Real-time API for accessing Google Search data.
- [SearXNG](/readme/chatflows/langchain/tools/searxng.md): Wrapper around SearXNG - a free internet metasearch engine.
- [Serp API](/readme/chatflows/langchain/tools/serp-api.md): Wrapper around SerpAPI - a real-time API to access Google search results.
- [Serper](/readme/chatflows/langchain/tools/serper.md): Wrapper around Serper.dev - Google Search API.
- [Web Browser](/readme/chatflows/langchain/tools/web-browser.md): Gives the agent the ability to visit a website and extract information.
- [Write File](/readme/chatflows/langchain/tools/write-file.md): Write file to disk.
- [Vector Stores](/readme/chatflows/langchain/vector-stores.md): LangChain Vector Store Nodes
- [AstraDB](/readme/chatflows/langchain/vector-stores/astradb.md)
- [Chroma](/readme/chatflows/langchain/vector-stores/chroma.md)
- [Elastic](/readme/chatflows/langchain/vector-stores/elastic.md)
- [Faiss](/readme/chatflows/langchain/vector-stores/faiss.md): Upsert embedded data and perform similarity search upon query using Faiss library from Meta.
- [In-Memory Vector Store](/readme/chatflows/langchain/vector-stores/in-memory-vector-store.md): In-memory vectorstore that stores embeddings and does an exact, linear search for the most similar embeddings.
- [Milvus](/readme/chatflows/langchain/vector-stores/milvus.md): Upsert embedded data and perform similarity search upon query using Milvus, the world's most advanced open-source vector database.
- [MongoDB Atlas](/readme/chatflows/langchain/vector-stores/mongodb-atlas.md): Upsert embedded data and perform similarity or MMR search upon query using MongoDB Atlas, a managed cloud MongoDB database.
- [OpenSearch](/readme/chatflows/langchain/vector-stores/opensearch.md): Upsert embedded data and perform similarity search upon query using OpenSearch, an open-source, all-in-one vector database.
- [Pinecone](/readme/chatflows/langchain/vector-stores/pinecone.md): Upsert embedded data and perform similarity search upon query using Pinecone, a leading fully managed hosted vector database.
- [Postgres](/readme/chatflows/langchain/vector-stores/postgres.md): Upsert embedded data and perform similarity search upon query using pgvector on Postgres.
- [Qdrant](/readme/chatflows/langchain/vector-stores/qdrant.md)
- [Redis](/readme/chatflows/langchain/vector-stores/redis.md)
- [SingleStore](/readme/chatflows/langchain/vector-stores/singlestore.md)
- [Supabase](/readme/chatflows/langchain/vector-stores/supabase.md)
- [Upstash Vector](/readme/chatflows/langchain/vector-stores/upstash-vector.md)
- [Vectara](/readme/chatflows/langchain/vector-stores/vectara.md)
- [Weaviate](/readme/chatflows/langchain/vector-stores/weaviate.md): Upsert embedded data and perform similarity or mmr search using Weaviate, a scalable open-source vector database.
- [Zep Collection - Open Source](/readme/chatflows/langchain/vector-stores/zep-collection-open-source.md): Upsert embedded data and perform similarity or mmr search upon query using Zep, a fast and scalable building block for LLM apps.
- [Zep Collection - Cloud](/readme/chatflows/langchain/vector-stores/zep-collection-cloud.md): Upsert embedded data and perform similarity or mmr search upon query using Zep, a fast and scalable building block for LLM apps.
- [LlamaIndex](/readme/chatflows/llamaindex.md): Learn how Tailwinds integrates with the LlamaIndex framework
- [Agents](/readme/chatflows/llamaindex/agents.md): LlamaIndex Agent Nodes
- [OpenAI Tool Agent](/readme/chatflows/llamaindex/agents/openai-tool-agent.md): Agent that uses OpenAI Function Calling to pick the tools and args to call using LlamaIndex.
- [Anthropic Tool Agent](/readme/chatflows/llamaindex/agents/openai-tool-agent-1.md): Agent that uses Anthropic Function Calling to pick the tools and args to call using LlamaIndex.
- [Chat Models](/readme/chatflows/llamaindex/chat-models.md): LlamaIndex Chat Model Nodes
- [AzureChatOpenAI](/readme/chatflows/llamaindex/chat-models/azurechatopenai.md): Wrapper around the Azure OpenAI Chat LLM, specific to LlamaIndex.
- [ChatAnthropic](/readme/chatflows/llamaindex/chat-models/chatanthropic.md): Wrapper around the ChatAnthropic LLM, specific to LlamaIndex.
- [ChatMistral](/readme/chatflows/llamaindex/chat-models/chatmistral.md): Wrapper around the ChatMistral LLM, specific to LlamaIndex.
- [ChatOllama](/readme/chatflows/llamaindex/chat-models/chatollama.md): Wrapper around the ChatOllama LLM, specific to LlamaIndex.
- [ChatOpenAI](/readme/chatflows/llamaindex/chat-models/chatopenai.md): Wrapper around the OpenAI Chat LLM, specific to LlamaIndex.
- [ChatTogetherAI](/readme/chatflows/llamaindex/chat-models/chattogetherai.md): Wrapper around the ChatTogetherAI LLM, specific to LlamaIndex.
- [ChatGroq](/readme/chatflows/llamaindex/chat-models/chatgroq.md): Wrapper around the Groq LLM, specific to LlamaIndex.
- [Embeddings](/readme/chatflows/llamaindex/embeddings.md): LlamaIndex Embeddings Nodes
- [Azure OpenAI Embeddings](/readme/chatflows/llamaindex/embeddings/azure-openai-embeddings.md): Azure OpenAI API embeddings, specific to LlamaIndex.
- [OpenAI Embedding](/readme/chatflows/llamaindex/embeddings/openai-embedding.md): OpenAI embeddings, specific to LlamaIndex.
- [Engine](/readme/chatflows/llamaindex/engine.md): LlamaIndex Engine Nodes
- [Query Engine](/readme/chatflows/llamaindex/engine/query-engine.md)
- [Simple Chat Engine](/readme/chatflows/llamaindex/engine/simple-chat-engine.md)
- [Context Chat Engine](/readme/chatflows/llamaindex/engine/context-chat-engine.md)
- [Sub-Question Query Engine](/readme/chatflows/llamaindex/engine/sub-question-query-engine.md)
- [Response Synthesizer](/readme/chatflows/llamaindex/response-synthesizer.md): LlamaIndex Response Synthesizer Nodes
- [Refine](/readme/chatflows/llamaindex/response-synthesizer/refine.md)
- [Compact And Refine](/readme/chatflows/llamaindex/response-synthesizer/compact-and-refine.md)
- [Simple Response Builder](/readme/chatflows/llamaindex/response-synthesizer/simple-response-builder.md)
- [Tree Summarize](/readme/chatflows/llamaindex/response-synthesizer/tree-summarize.md)
- [Tools](/readme/chatflows/llamaindex/tools.md): LlamaIndex Agent Nodes
- [Query Engine Tool](/readme/chatflows/llamaindex/tools/query-engine-tool.md)
- [Vector Stores](/readme/chatflows/llamaindex/vector-stores.md): LlamaIndex Vector Store Nodes
- [Pinecone](/readme/chatflows/llamaindex/vector-stores/pinecone.md): Upsert embedded data and perform similarity search upon query using Pinecone, a leading fully managed hosted vector database.
- [SimpleStore](/readme/chatflows/llamaindex/vector-stores/queryengine-tool.md): Upsert embedded data to local path and perform similarity search.
