
HTTP Request and Supabase integration

Save yourself the work of writing custom integrations for HTTP Request and Supabase and use n8n instead. Build adaptable and scalable Development, Core Nodes, and Data & Storage workflows that work with your technology stack, all within a building experience you will love.

How to connect HTTP Request and Supabase

  • Step 1: Create a new workflow
  • Step 2: Add and configure nodes
  • Step 3: Connect
  • Step 4: Customize and extend your integration
  • Step 5: Test and activate your workflow

Step 1: Create a new workflow and add the first step

In n8n, click the "Add workflow" button in the Workflows tab to create a new workflow. Add the starting point: a trigger that determines when your workflow should run. This can be an app event, a schedule, a webhook call, another workflow, an AI chat, or a manual trigger. Sometimes, the HTTP Request node itself can serve as your starting point.


Step 2: Add and configure HTTP Request and Supabase nodes

You can find HTTP Request and Supabase in the nodes panel. Drag them onto your workflow canvas and select their actions. Click each node, choose a credential, and authenticate to grant n8n access. Configure the HTTP Request and Supabase nodes one by one: input data on the left, parameters in the middle, and output data on the right.


Step 3: Connect HTTP Request and Supabase

A connection establishes a link between HTTP Request and Supabase (or vice versa) to route data through the workflow. Data flows from the output of one node to the input of another. You can have single or multiple connections for each node.
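
For example, once the two nodes are wired together, a field in the Supabase node can reference the HTTP Request node's output with an n8n expression; the node name and field below are placeholders for whatever your workflow actually uses:

  {{ $('HTTP Request').item.json.id }}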


Step 4: Customize and extend your HTTP Request and Supabase integration

Use n8n's core nodes such as If, Split Out, Merge, and others to transform and manipulate data. Write custom JavaScript or Python in the Code node and run it as a step in your workflow. Connect HTTP Request and Supabase with any of n8n’s 1000+ integrations, and incorporate advanced AI logic into your workflows.
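
As a small illustration of that step, a Code node placed between HTTP Request and Supabase could reshape the API response into the columns of your table; the field names below are assumptions, not part of any specific template:

  // n8n Code node (JavaScript): map an assumed API response onto assumed Supabase columns
  return $input.all().map(item => ({
    json: {
      external_id: item.json.id,
      name: item.json.title,
      synced_at: new Date().toISOString(),
    },
  }));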


Step 5: Test and activate your HTTP Request and Supabase workflow

Save and run the workflow to see if everything works as expected. Based on your configuration, data should flow from HTTP Request to Supabase or vice versa. Debugging is straightforward: check past executions to isolate and fix any mistakes. Once you've tested everything, save your workflow and activate it.


AI agent to chat with files in Supabase Storage

Video Guide

I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n.

Youtube Link

Who is this for?
This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.

What problem does this workflow solve?
Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.

What this workflow does
The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include:
  • Fetching and comparing files to avoid duplicate processing.
  • Handling file downloads and extracting content based on the file type.
  • Converting documents into vectorized data for contextual information retrieval.
  • Storing and querying vectorized data from a Supabase vector store.

  • File Extraction and Processing: Automates handling of multiple file formats (e.g., PDFs, text files) and extracts document content.
  • Vectorized Embeddings Creation: Generates embeddings for processed data to enable AI-driven interactions.
  • Dynamic Data Querying: Allows users to query their document repository conversationally using a chatbot.

Setup

n8n Workflow
Fetch File List from Supabase:
  • Use Supabase to retrieve the stored file list from a specified bucket.
  • Add logic to manage the empty folder placeholders returned by Supabase, avoiding incorrect processing.
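
A minimal Code node sketch of that placeholder handling, assuming the storage listing arrives with a name field per object (Supabase adds an .emptyFolderPlaceholder object to otherwise empty folders):

  // n8n Code node (JavaScript): drop Supabase's empty-folder placeholder objects
  return $input.all().filter(item =>
    item.json.name && item.json.name !== '.emptyFolderPlaceholder'
  );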

Compare and Filter Files:
  • Aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table.
  • Exclude duplicates and skip placeholder files to ensure only unprocessed files are handled.
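
One way to express that comparison in a Code node, assuming the storage listing is the current input and the rows of the files table come from a node referenced by name (the node and column names below are hypothetical):

  // n8n Code node (JavaScript): keep only storage objects not yet recorded in the files table
  const known = new Set(
    $('Get files table').all().map(row => row.json.file_name) // hypothetical node and column names
  );
  return $input.all().filter(item => !known.has(item.json.name));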

Handle File Downloads:
  • Download new files using detailed storage configurations for public/private access.
  • Adjust the storage settings and GET requests to match your Supabase setup.
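
As a reference point, a private-bucket download from the HTTP Request node usually goes through Supabase's storage API with the key in the headers; the project ref, bucket, and key below are placeholders for your own setup:

  // HTTP Request node settings (sketch)
  // Method: GET
  // URL: https://<project-ref>.supabase.co/storage/v1/object/<bucket>/{{ $json.name }}
  // Headers: apikey: <service-role-key>
  //          Authorization: Bearer <service-role-key>
  // Response: set the response format to "File" so the binary content is preserved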

File Type Processing:
  • Use a Switch node to target specific file types (e.g., PDFs or text files).
  • Employ relevant tools to process the content: for PDFs, extract the embedded content; for text files, process the text data directly.

Content Chunking:
  • Break large text data into smaller chunks using the Text Splitter node.
  • Define the chunk size (default: 500 tokens) and overlap to retain the necessary context across chunks.
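
A rough Code node equivalent of that splitting step, using characters rather than tokens for simplicity (the input field name text is an assumption):

  // n8n Code node (JavaScript): naive character-based chunking with overlap
  const chunkSize = 500;
  const overlap = 50;
  const text = $input.first().json.text || '';
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks.map(chunk => ({ json: { chunk } }));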

Vector Embedding Creation:
  • Generate vectorized embeddings for the processed content using OpenAI's embedding tools.
  • Ensure metadata, such as the file ID, is included for easy data retrieval.
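
If you prefer to call OpenAI's embeddings endpoint from an HTTP Request node rather than the built-in embeddings node, the request looks roughly like this (the model choice and field names are assumptions):

  // HTTP Request node (sketch)
  // Method: POST
  // URL: https://api.openai.com/v1/embeddings
  // Header: Authorization: Bearer <OpenAI API key>
  // JSON body:
  //   { "model": "text-embedding-3-small", "input": "{{ $json.chunk }}" }
  // The response's data[0].embedding array is what gets stored, alongside metadata such as the file ID.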

Store Vectorized Data:
  • Save the vectorized information into a dedicated Supabase vector store.
  • Use the default schema and table provided by Supabase for seamless setup.
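
Under the hood, that default schema boils down to one row per chunk with content, metadata, and an embedding column. A hedged sketch of the insert, whether done through the Supabase node or a direct REST call (the table and column names follow a common default and may differ in your project):

  // HTTP Request node (sketch): insert one chunk into the vector table
  // Method: POST
  // URL: https://<project-ref>.supabase.co/rest/v1/documents
  // Headers: apikey / Authorization with your service role key
  // JSON body:
  //   {
  //     "content": "{{ $json.chunk }}",
  //     "metadata": { "file_id": "{{ $json.file_id }}" },
  //     "embedding": [/* vector from the embeddings step */]
  //   }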

AI Chatbot Integration:
  • Add a chatbot node to handle user input and retrieve relevant document chunks.
  • Use metadata such as the file ID for targeted queries, especially when multiple documents are involved.

Testing
  • Upload sample files to your Supabase bucket.
  • Verify that files are processed and stored successfully in the vector store.
  • Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?").
  • Test for accuracy and contextual relevance of the retrieved results.


Popular HTTP Request and Supabase workflows

Build a Knowledge Base Chatbot with Jotform, RAG Supabase, Together AI & Gemini

Youtube Video: https://youtu.be/dEtV7OYuMFQ?si=fOAlZWz4aDuFFovH

Workflow Pre-requisites

Step 1: Supabase Setup
First, replace the keys in the "Save the embedding in DB" & "Search Embeddings" nodes with your new Supabase keys. After that, run the following code snippets in your Supabase SQL editor.

Create the table to store chunks and embeddings:

  CREATE TABLE public."RAG" (
    id bigserial PRIMARY KEY,
    chunk text NULL,
    embeddings vector(1024) NULL
  ) TABLESPACE pg_default;

Create a function to match embeddings:

  DROP FUNCTION IF EXISTS public.matchembeddings1(integer, vector);
  CREATE OR REPLACE FUNCTION public.matchembeddings1(
    match_count integer,
    query_embedding vector
  )
  RETURNS TABLE (
    chunk text,
    similarity float
  )
  LANGUAGE plpgsql
  AS $$
  BEGIN
    RETURN QUERY
    SELECT
      R.chunk,
      1 - (R.embeddings <=> query_embedding) AS similarity
    FROM public."RAG" AS R
    ORDER BY R.embeddings <=> query_embedding
    LIMIT match_count;
  END;
  $$;

Step 2: Create a Jotform with these fields
  • Your full name
  • Email address
  • Upload PDF Document (the field where you upload the knowledge base as a PDF)

Step 3: Get a Together AI API Key
Get a Together AI API key and paste it into the "Embedding Uploaded document" node and the "Embed User Message" node.

Here is a detailed, node-by-node explanation of the n8n workflow, which is divided into two main parts.

Part 1: Ingesting Knowledge from a PDF
This first sequence of nodes runs when you submit a PDF through a Jotform. Its purpose is to read the document, process its content, and save it in a specialized database for the AI to use later.
  • JotForm Trigger (Trigger): Starts the entire workflow. It is configured to listen for new submissions on a specific Jotform. When someone uploads a file and submits the form, this node activates and passes the submission data to the next step.
  • Grab New knowledgebase (HTTP Request): The initial trigger from Jotform only contains basic information. This node makes a follow-up call to the Jotform API using the submissionID to get the complete details of that submission, including the specific link to the uploaded file.
  • Grab the uploaded knowledgebase file link (HTTP Request): Using the file link obtained from the previous node, this step downloads the actual PDF file. It is set to receive the response as a file, not as text.
  • Extract Text from PDF File (Extract From File): This utility node takes the binary PDF file downloaded in the previous step and extracts all the readable text content from it. The output is a single block of plain text.
  • Splitting into Chunks (Code): Runs a small JavaScript snippet that takes the large block of text from the PDF and chops it into smaller, more manageable pieces, or "chunks," each of a predefined length. This is critical because AI models work more effectively with smaller, focused pieces of text.
  • Embedding Uploaded document (HTTP Request): A key AI step. It sends each individual text chunk to an embeddings API. A specified AI model converts the semantic meaning of the chunk into a numerical list called an embedding or vector. This vector is like a mathematical fingerprint of the text's meaning.
  • Save the embedding in DB (Supabase): Connects to your Supabase database. For every chunk, it creates a new row in a specified table and stores two important pieces of information: the original text chunk and its corresponding numerical embedding (its "fingerprint") from the previous step.
Part 2: Answering Questions via Chat
This second sequence starts when a user sends a message. It uses the knowledge stored in the database to find relevant information and generate an intelligent answer.
  • When chat message received (Chat Trigger): Starts the second part of the workflow. It listens for any incoming message from a user in a connected chat application.
  • Embend User Message (HTTP Request): Takes the user's question and sends it to the exact same embeddings API and model used in Part 1. This converts the question's meaning into the same kind of numerical vector or "fingerprint."
  • Search Embeddings (HTTP Request): The "retrieval" step. It calls a custom database function in Supabase, sends the question's embedding to that function, and asks it to search the knowledge base table for a specified number of top text chunks whose embeddings are mathematically most similar to the question's embedding.
  • Aggregate (Aggregate): The search from the previous step returns multiple separate items. This utility node bundles those items into a single, combined piece of data, which makes it easier to feed all the context into the final AI model at once.
  • AI Agent & Google Gemini Chat Model (LangChain Agent & AI Model): The "generation" step where the final answer is created. The AI Agent node is given a detailed set of instructions (a prompt) that tells the Google Gemini Chat Model to act as a professional support agent. Crucially, it provides the AI with the user's original question and the aggregated text chunks from the Aggregate node as its only source of truth. It then instructs the AI to formulate an answer based only on that provided context, format it for a specific chat style, and say "I don't know" if the answer cannot be found in the chunks. This prevents the AI from making things up.
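
For orientation, the "Search Embeddings" request in Part 2 maps onto Supabase's RPC endpoint for the matchembeddings1 function created in Step 1; a hedged sketch, with the project ref and key as placeholders:

  // HTTP Request node (sketch): call the match function via PostgREST
  // Method: POST
  // URL: https://<project-ref>.supabase.co/rest/v1/rpc/matchembeddings1
  // Headers: apikey / Authorization with your Supabase key
  // JSON body:
  //   {
  //     "match_count": 5,
  //     "query_embedding": [/* embedding of the user's question */]
  //   }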

RAG Chatbot with Supabase + TogetherAI + Openrouter

⚠️ RUN the FIRST WORKFLOW ONLY ONCE (it converts your content into embedding format, saves it in the DB, and is then ready for the RAG chat).

First workflow:
  • 📌 Telegram Trigger (telegramTrigger): Waits for new Telegram messages to trigger the workflow. Note: currently disabled.
  • 📄 Content for the Training (googleDocs): Fetches document content from Google Docs using its URL. Uses Service Account authentication.
  • ✂️ Splitting into Chunks (code): Splits the fetched document text into smaller chunks (1000 chars each) for processing by looping over the text and slicing it.
  • 🧠 Embedding Uploaded Document (httpRequest): Calls the Together AI embedding API to get vector embeddings for each text chunk. Sends JSON with the model name and the chunk as input.
  • 🛢 Save the embedding in DB (supabase): Saves each text chunk and its embedding vector into the Supabase embed table.

Second workflow:
  • 💬 When chat message received (chatTrigger): Starts the workflow when a user sends a chat message and sends an initial greeting message to the user.
  • 🧩 Embend User Message (httpRequest): Generates the embedding for the user's input message by calling the Together AI embeddings API.
  • 🔍 Search Embeddings (httpRequest): Searches the Supabase DB for the top 5 most similar text chunks based on the generated embedding, via the Supabase RPC function matchembeddings1.
  • 📦 Aggregate (aggregate): Combines all retrieved text chunks into a single aggregated context for the LLM.
  • 🧠 Basic LLM Chain (chainLlm): Passes the user's question plus the aggregated context to the LLM to generate a detailed answer. The prompt instructs the LLM to answer only based on the context.
  • 🤖 OpenRouter Chat Model (lmChatOpenRouter): Provides the actual AI language model that processes the prompt. Uses the qwen/qwen3-8b:free model via OpenRouter; you can use any model of your choice.

Automated Zoho Inventory to Supabase Product Data Pipeline

Description
This powerful n8n automation template enables seamless synchronization between Zoho Inventory and Supabase, keeping your product database up to date with zero manual effort. Whether you're running an eCommerce store, inventory dashboard, or product catalog app, this workflow ensures your data pipeline stays clean, consistent, and fully automated.

What This Template Does:
  • 🔁 Runs on a schedule to fetch inventory data from Zoho
  • 🔓 Authenticates via OAuth using a refresh token for secure API access
  • 📦 Fetches products & variants with complete metadata
  • 🔄 Splits each item and maps it into Supabase row-by-row
  • 📊 Pushes rich product data, including name, SKU, unit, tags, stock levels, dimensions, and up to 3 custom attributes

Fields Included in Sync:
  • Product ID, Variant ID, Variant Name, Brand, SKU
  • Returnability, Item Type, Unit, Attributes (1–3)
  • Tags, Stock on Hand, UPC/EAN/ISBN, Status
  • Reorder Level, Dimensions, Created Time, and more

Requirements:
  • Zoho Inventory API access (with Refresh Token)
  • Supabase account & API key
  • Target table (e.g., Fairy Frills) set up in Supabase
  • Optional: Custom field mapping for additional use cases

Perfect For:
  • Inventory managers syncing Zoho to custom dashboards
  • D2C brands and eCommerce platforms powered by Supabase
  • Internal tooling teams needing a real-time product database sync
  • Startups replacing spreadsheets with a production-grade backend

Transcribe Youtube Videos for Free with youtube-transcript.io & save to Supabase

Transcribe New YouTube Videos and Save to Supabase

Who's It For?
This workflow is for content creators, marketers, researchers, and anyone who needs to quickly get text transcripts from YouTube videos. If you analyze video content, repurpose it for blogs or social media, or want to make videos searchable, this template will save you hours of manual work.

What It Does
This template automatically monitors multiple YouTube channels for new videos. When a new video is published, it extracts the video ID, retrieves the full transcript using the youtube-transcript.io API, and saves the structured data (including the title, author, URL, and full transcript) into a Supabase table. It intelligently filters out YouTube Shorts by default and includes error handling to ensure that only successful transcriptions are processed.

Requirements
  • A Supabase account with a table ready to receive the video data.
  • An API key from youtube-transcript.io (offers a free tier).
  • The Channel ID for each YouTube channel you want to track. You can find this using a free online tool like TunePocket's Channel ID Finder.

How to Set Up
  • Add Channel IDs: In the "Channels To Track" node, replace the example YouTube Channel IDs with your own. The workflow uses these IDs to create RSS links and find new videos.
  • Configure API Credentials: Find the "youtube-captions" HTTP Request node. In the credentials tab, create a new "Header Auth" credential. Name it youtube-transcript-io and paste your API key into the "Value" field. The "Name" field should be x-api-key.
  • Connect Your Supabase Account: Navigate to the "Add to Content Queue Table" node. Create new credentials for your Supabase account using your Project URL and API key. Once connected, select your target table and map the incoming fields (title, source_url, content_snippet, etc.) to the correct columns in your table.
  • Set Your Schedule (Optional): The workflow starts with a manual trigger. To run it automatically, replace the "When clicking ‘Execute workflow’" node with a Schedule node and set your desired interval (e.g., once a day).
  • Activate the Workflow: Save your changes and toggle the workflow to Active in the top right corner.

How to Customize
  • Transcribe YouTube Shorts: To include Shorts in your workflow, select the "Does url exist?" If node and delete the second condition that checks for youtube.com/shorts.
  • Change Your Database: Don't use Supabase? Simply replace the "Add to Content Queue Table" node with another database or spreadsheet node, such as Google Sheets, Airtable, or n8n's own Table.

AI Agent To Chat With Files In Supabase Storage and Google Drive

Video Guide
I prepared a detailed guide that illustrates the entire process of building an AI agent using Supabase and Google Drive within n8n workflows.

Youtube Link

Who is this for?
This workflow is designed for developers, data scientists, and business users who wish to automate document management and enable AI-powered interactions over their stored files. It's especially beneficial for scenarios where users need to process, analyze, and retrieve information from uploaded documents rapidly.

What problem does this workflow solve?
Managing files across multiple platforms often involves tedious manual processes. This workflow facilitates automated file handling, making it easier for users to upload, parse, and interact with documents through an AI agent. It reduces redundancy and enhances the efficiency of data retrieval and management tasks.

What this workflow does
This workflow integrates Supabase storage with Google Drive and employs an AI agent to manage files effectively. The agent can:
  • Upload files to Supabase storage and activate processes based on file changes in Google Drive.
  • Retrieve and parse documents, converting them into a structured format for easy querying.
  • Utilize an AI agent to answer user queries based on saved document data.

  • Data Collection: The workflow initially gathers files from Supabase storage, ensuring no duplicates are processed in the 'files' table.
  • File Handling: It processes files to be parsed based on their type, leveraging LlamaParse for effective data transformation.
  • Google Drive Integration: The workflow monitors a designated Google Drive folder to upload files automatically and refresh document records in the database with new data.
  • AI Interaction: A webhook is established to enable the AI agent to converse with users, facilitating queries and leveraging stored document knowledge.

Setup
  • Supabase Storage Setup: Create a private bucket in Supabase storage, modifying the default name in the URL. Upload your files using the provided upload options.
  • Database Configuration: Establish the 'file' and 'document' tables in Supabase with the necessary fields. Execute any required SQL queries for enabling vector matching features.
  • n8n Workflow Logic: Start with a manual trigger for the initial workflow segment, or consider alternative triggers like webhooks. Replace all relevant credentials across nodes with your own to ensure seamless operation.
  • File Processing and Google Drive Monitoring: Set up file processing to take care of downloading and parsing files based on their types. Create triggers to monitor the designated Google Drive folder for file uploads and updates.
  • Integrate AI Agent: Configure the webhook for the AI agent to accept chat inputs while maintaining session context for enhanced user interactions. Utilize PostgreSQL to store user interactions and manage conversation states effectively.
  • Testing and Adjustments: Once everything is set up, run tests with the AI agent to validate its responses based on the documents in your database. Fine-tune the workflow and AI model as needed to achieve the desired performance.

Automated US Stock Portfolio Analysis with Telegram, Perplexity AI & PDF Reports

System Architecture
Two integrated n8n workflows providing automated US stock portfolio management through Telegram:

FLOW 1: Conversational Portfolio Manager
  • Telegram bot for interactive portfolio management
  • PDF upload & analysis via LlamaIndex Cloud API
  • Natural language portfolio updates via GPT-4.1-mini
  • Real-time user registration and data management

FLOW 2: Automated Weekly Reports
  • Schedule-triggered weekly analysis (every 7 days)
  • Perplexity AI sonar-deep-research for market analysis
  • Professional PDF report generation via PDFco
  • Automatic Telegram delivery to all registered users

Build your own HTTP Request and Supabase integration

Create custom HTTP Request and Supabase workflows by choosing triggers and actions. Nodes come with global operations and settings, as well as app-specific parameters that can be configured. You can also use the HTTP Request node to query data from any app or service with a REST API.
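
For example, when the Supabase node doesn't expose an operation you need, the HTTP Request node can hit Supabase's auto-generated REST API directly; the project ref, table, and filter below are placeholders:

  // HTTP Request node (sketch): read rows through Supabase's PostgREST endpoint
  // Method: GET
  // URL: https://<project-ref>.supabase.co/rest/v1/<table>?select=*&status=eq.active
  // Headers: apikey: <anon-or-service-key>
  //          Authorization: Bearer <anon-or-service-key>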

Supabase supported actions

  • Create: Create a new row
  • Delete: Delete a row
  • Get: Get a row
  • Get Many: Get many rows
  • Update: Update a row

Use case

Save engineering resources

Reduce time spent on customer integrations, engineer faster POCs, and keep your customer-specific functionality separate from your product, all without having to code.

Learn more

FAQs

  • Can HTTP Request connect with Supabase?

  • Can I use HTTP Request’s API with n8n?

  • Can I use Supabase’s API with n8n?

  • Is n8n secure for integrating HTTP Request and Supabase?

  • How do I get started with the HTTP Request and Supabase integration on n8n.io?

Need help setting up your HTTP Request and Supabase integration?

Discover our community's latest recommendations and join the discussions about HTTP Request and Supabase integration.
Moiz Contractor
theo
Jon
Dan Burykin
Tony

Looking to integrate HTTP Request and Supabase in your company?

Over 3000 companies switch to n8n every single week

Why use n8n to integrate HTTP Request with Supabase

Build complex workflows, really fast

Handle branching, merging and iteration easily.
Pause your workflow to wait for external events.

Code when you need it, UI when you don't

Simple debugging

Your data is displayed alongside your settings, making edge cases easy to track down.

Use templates to get started fast

Use 1000+ workflow templates available from our core team and our community.

Reuse your work

Copy and paste, easily import and export workflows.

Implement complex processes faster with n8n
