
AI-Powered Research Assistant for Platform Questions with GPT-4o and MCP

Created by onurpolat05 (Onur)

Description

This workflow empowers you to effortlessly get answers to your n8n platform questions through an AI-powered assistant. Simply send your query, and the assistant will search documentation, forum posts, and example workflows to provide comprehensive, accurate responses tailored to your specific needs.

Note: This workflow uses community nodes (n8n-nodes-mcp.mcpClientTool) and will only work on self-hosted n8n instances. You'll need to install the required community nodes before importing this workflow.

What does this workflow do?

This workflow streamlines the information retrieval process by automatically researching n8n platform documentation, community forums, and example workflows, providing you with relevant answers to your questions.

Who is this for?

  • New n8n Users: Quickly get answers to basic platform questions and learn how to use n8n effectively
  • Experienced Developers: Find solutions to specific technical issues or discover advanced workflows
  • Teams: Boost productivity by automating the research process for n8n platform questions
  • Anyone looking to leverage AI for efficient and accurate n8n platform knowledge retrieval

Benefits

  • Effortless Research: Automate the research process across n8n documentation, forum posts, and example workflows
  • AI-Powered Intelligence: Leverage the power of LLMs to understand context and generate helpful responses
  • Increased Efficiency: Save time and resources by automating the research process
  • Quick Solutions: Get immediate answers to your n8n platform questions
  • Enhanced Learning: Discover new workflows, features, and best practices to improve your n8n experience

How It Works

  1. Receive Request: The workflow starts when a chat message is received containing your n8n-related question
  2. AI Processing: The AI agent powered by OpenAI GPT-4o analyzes your question
  3. Research and Information Gathering: The system searches across multiple sources:
    • Official n8n documentation for general knowledge and how-to guides
    • Community forums for bug reports and specific issues
    • Example workflow repository for relevant implementations
  4. Response Generation: The AI agent compiles the research and generates a clear, comprehensive answer
  5. Output: The workflow provides you with the relevant information and step-by-step guidance when applicable

n8n Nodes Used

  • When chat message received (Chat Trigger)
  • OpenAI Chat Model (GPT-4o mini)
  • n8n AI Agent
  • n8n-assistant tools (MCP Client Tool - Community Node)
  • n8n-assistant execute (MCP Client Tool - Community Node)
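
The two community nodes appear as MCP Client Tool entries in the exported workflow JSON. A minimal sketch of what such an entry might look like is below; the `operation` value, `toolParameters` field, and credential name are illustrative assumptions, not the exact schema of `n8n-nodes-mcp` — only the node type string and the "specific" value come from this template:

```json
{
  "name": "n8n-assistant execute",
  "type": "n8n-nodes-mcp.mcpClientTool",
  "parameters": {
    "operation": "executeTool",
    "toolParameters": "{ \"type\": \"specific\" }"
  },
  "credentials": {
    "mcpClientApi": "MCP Client account"
  }
}
```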

Prerequisites

  • Self-hosted n8n instance
  • OpenAI API credentials
  • MCP client community node installed
  • MCP server configured to search n8n resources

Setup

  1. Import the workflow JSON into your n8n instance
  2. Configure the OpenAI credentials
  3. Configure your MCP client API credentials
  4. In the n8n-assistant execute node, ensure the parameter is set to "specific" (the original export contained the typo "spesific")
  5. Test the workflow by sending a message with an n8n-related question

MCP Server Connection

To connect to the MCP server that powers this assistant's research capabilities, you need to use the following URL:
https://smithery.ai/server/@onurpolat05/n8n-assistant

This MCP server is specifically designed to search across three types of n8n resources:

  1. Official documentation for general platform information and workflow creation guidance
  2. Community forums for bug-related issues and troubleshooting
  3. Example workflow repositories for reference implementations

Configure this URL in your MCP client credentials to enable the assistant to retrieve relevant information based on user queries.
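
In the MCP client credentials, this amounts to pointing the client at the server endpoint. A minimal sketch of the credential data, assuming an SSE-style connection (the `sseEndpoint` field name is an assumption for illustration; the URL is the one given above):

```json
{
  "sseEndpoint": "https://smithery.ai/server/@onurpolat05/n8n-assistant"
}
```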

This workflow combines the convenience of chat with the power of AI to provide a seamless n8n platform research experience. Start getting instant answers to your n8n questions today!
