
Gmail and Supabase integration

Save yourself the work of writing custom integrations for Gmail and Supabase and use n8n instead. Build adaptable and scalable Communication, human-in-the-loop (HITL), and Data & Storage workflows that work with your technology stack. All within a building experience you will love.

How to connect Gmail and Supabase

  • Step 1: Create a new workflow
  • Step 2: Add and configure nodes
  • Step 3: Connect
  • Step 4: Customize and extend your integration
  • Step 5: Test and activate your workflow

Step 1: Create a new workflow and add the first step

In n8n, click the "Add workflow" button in the Workflows tab to create a new workflow. Add the starting point: a trigger that defines when your workflow should run. This can be an app event, a schedule, a webhook call, another workflow, an AI chat, or a manual trigger. Sometimes, the HTTP Request node might already serve as your starting point.


Step 2: Add and configure Gmail and Supabase nodes

You can find Gmail and Supabase in the nodes panel. Drag them onto your workflow canvas, selecting their actions. Click each node, choose a credential, and authenticate to grant n8n access. Configure Gmail and Supabase nodes one by one: input data on the left, parameters in the middle, and output data on the right.


Step 3: Connect Gmail and Supabase

A connection establishes a link between Gmail and Supabase (or vice versa) to route data through the workflow. Data flows from the output of one node to the input of another. You can have single or multiple connections for each node.


Step 4: Customize and extend your Gmail and Supabase integration

Use n8n's core nodes such as If, Split Out, Merge, and others to transform and manipulate data. Write custom JavaScript or Python in the Code node and run it as a step in your workflow. Connect Gmail and Supabase with any of n8n’s 1000+ integrations, and incorporate advanced AI logic into your workflows.
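A Code node step between Gmail and Supabase often just reshapes items before inserting them. The standalone Python sketch below mirrors that kind of transform outside n8n; the field names (`subject`, `from`, `snippet`) and the target row shape are illustrative assumptions, not a fixed Gmail or Supabase schema.

```python
# Sketch of a Code-node-style transform: shape raw Gmail message fields
# into rows for a Supabase insert. Field names are illustrative.

def to_supabase_rows(messages):
    """Keep only messages with a subject and map them to table rows."""
    rows = []
    for msg in messages:
        subject = (msg.get("subject") or "").strip()
        if not subject:
            continue  # skip messages we cannot meaningfully store
        rows.append({
            "subject": subject,
            "sender": msg.get("from", "unknown"),
            "snippet": (msg.get("snippet") or "")[:200],
        })
    return rows

if __name__ == "__main__":
    sample = [
        {"subject": "Order #123", "from": "a@example.com", "snippet": "Hi"},
        {"subject": "  ", "from": "b@example.com", "snippet": "no subject"},
    ]
    print(to_supabase_rows(sample))
```

In an actual Code node you would apply the same mapping to the incoming items rather than a hardcoded sample list.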


Step 5: Test and activate your Gmail and Supabase workflow

Save and run the workflow to see if everything works as expected. Depending on your configuration, data should flow from Gmail to Supabase or vice versa. Debugging is straightforward: check past executions to isolate and fix any issues. Once everything works, save your workflow and activate it.


Smart email assistant: automate customer support with AI & Supabase

Intelligent Email Support System with Vector Database

Overview

This n8n workflow automates email support using AI and vector database technology to provide smart, context-aware responses. It seamlessly integrates email automation and document management, ensuring efficient customer support.

📌 System Components

✉️ Email Support System

Email Monitoring & Classification

Gmail trigger node monitoring inbox
AI-powered email classification
Intelligent routing (support vs non-support inquiries)

AI Response Generation

LangChain agent for response automation
OpenAI integration for NLP-driven replies
Vector-based knowledge retrieval
Automated draft creation in Gmail

Vector Database System

Supabase vector store for document management
OpenAI embeddings for vector conversion
Fast and efficient similarity search

📂 Document Management System

Google Drive Integration

Monitors specific folders for new/updated files
Automatic document processing
Supports various file formats

Document Processing Pipeline

Auto file download & text extraction
Smart text chunking for better indexing
Embedding generation via OpenAI
Storage in Supabase vector database

🔄 Workflow Processes

📧 Email Support Flow

Monitor Gmail inbox for new emails
AI classification of incoming messages
Route support emails to AI response generator
Perform vector similarity search for knowledge retrieval
Generate personalized AI-driven response
Create email drafts in Gmail
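In the workflow, an AI node performs the support/non-support classification. As a rough illustration of the routing decision only, here is a keyword-based stand-in (the keyword list is an invented example, not part of the template):

```python
# Keyword-based stand-in for the AI classification step, illustrating
# how incoming mail is routed to the support branch or skipped.
SUPPORT_KEYWORDS = {"refund", "error", "help", "cancel", "broken", "invoice"}

def route_email(subject: str, body: str) -> str:
    """Return 'support' if the message looks like a support inquiry."""
    text = f"{subject} {body}".lower()
    if any(word in text for word in SUPPORT_KEYWORDS):
        return "support"
    return "non-support"
```

The real AI classifier handles phrasing the keyword approach would miss; this only shows where the branch happens in the flow.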

📁 Document Management Flow

Monitor Google Drive for new/updated files
Auto-download and process documents
Clean up outdated vector entries for updated files
Extract and split document text efficiently
Generate OpenAI embeddings
Store processed data in Supabase vector DB
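The "chunk, embed, store" steps above reduce to building one row per chunk. In this sketch a deterministic hash-based stub stands in for the OpenAI embedding call (it is NOT semantically meaningful); the `metadata.file_id` field is what later lets the cleanup step delete outdated vectors when a file is updated. Row field names follow the `documents` table used elsewhere in this template.

```python
import hashlib

def fake_embed(text: str, dim: int = 8):
    """Deterministic stand-in for an embedding model (not semantic)."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dim]]

def build_rows(file_id: str, chunks):
    """Build one vector-store row per chunk, tagged with its source file."""
    return [
        {
            "content": chunk,
            "metadata": {"file_id": file_id, "chunk": i},
            "embedding": fake_embed(chunk),
        }
        for i, chunk in enumerate(chunks)
    ]
```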

⚙️ Setup Instructions

1️⃣ Prerequisites

  • **Supabase** account & project
  • **OpenAI API key**
  • **Gmail account** with OAuth2 setup
  • **Google Drive API** access
  • **n8n** installation

2️⃣ Supabase Database Setup

-- Create the vector extension
create extension if not exists vector;

-- Create the documents table
create table documents (
  id bigserial primary key,
  content text,
  metadata jsonb,
  embedding vector(1536)
);

-- Create an index for similarity search
create index on documents using ivfflat (embedding vector_cosine_ops)
with (lists = 100);
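The `vector_cosine_ops` index orders rows by cosine distance (pgvector's `<=>` operator), which is 1 minus cosine similarity. A pure-Python version of that metric makes the ranking behavior concrete:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity, as pgvector's <=> operator computes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

print(cosine_distance([1, 0], [1, 0]))  # identical direction -> 0.0
print(cosine_distance([1, 0], [0, 1]))  # orthogonal -> 1.0
```

Vectors pointing the same way have distance near 0 and rank first in a similarity search; magnitude does not matter, only direction.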

3️⃣ Google Drive Setup

Create & configure two monitored folders:

  • RAG folder for new documents
  • documents folder

Then assign correct folder permissions and add the folder IDs to the workflow.

4️⃣ Document Processing Configuration

Set up triggers for file creation and file updates, then configure text extraction:

  • Define chunk size & overlap settings
  • Set document metadata processing
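A chunk-size/overlap configuration like the one above reduces to a simple splitting rule. Here is a minimal Python sketch; the 1000/200 defaults mirror common settings, not values mandated by the template:

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200):
    """Split text into fixed-size chunks where each chunk repeats the
    last `overlap` characters of the previous one, so context that
    straddles a boundary still appears whole in some chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Larger overlap improves retrieval of boundary-spanning passages at the cost of more chunks (and more embedding calls).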

🔍 Maintenance & Optimization

📌 Regular Tasks

Monitor system performance
Update the knowledge base regularly
Review AI response quality
Optimize vector search parameters
Clean up outdated document embeddings

✅ Best Practices

Document Organization

Maintain structured folders & naming conventions
Keep knowledge base content updated

System Optimization

Track AI classification accuracy
Tune response times & chunk sizes
Perform regular database maintenance

🛠️ Troubleshooting

Email Issues

Verify Gmail API credentials
Check AI service uptime
Monitor classification performance

Document Processing Issues

Ensure correct file permissions
Validate extraction & embedding processes
Debug vector database insertions


Popular Gmail and Supabase workflows

Reddit Monitoring with AI Sentiment Analysis and Growth Insights Dashboard

This template gives you a complete, automated system for monitoring Reddit and extracting growth insights. It tracks discussions across target subreddits, surfaces what users love, dislike, and want changed, and highlights how they compare you to competitors. Paired with the free WeWeb UI template, it prioritizes engagement and organizes everything into a clean, easy-to-use dashboard, so every team gets the insights they need:

**Leadership** gains clarity on industry trends and emerging shifts
**Product** can adjust roadmaps and prioritize features or integrations
**Marketing** gets content angles, competitive messaging, and SEO topics
**Sales** receives objection insights straight from real conversations
**Support** spots early patterns in user challenges

🙌 Who this is for

Perfect for product teams, founders, and growth marketers who want to build and scale Reddit as a channel without spending hours manually scanning threads.

💫 What Makes This Different

**Eliminates manual scanning:** Automatically pull product and competitor mentions using F5Bot for free, without the high cost of traditional monitoring tools.
**Captures full conversations:** Track not just posts, but the entire comment chain where real insights, objections, and frustrations actually surface.
**AI-powered prioritization:** Every mention is classified by sentiment and topic so you know what to prioritize and why.
**Cross-team intelligence:** Highlights product insights, competitor signals, sales objections, user frustrations, and industry trends, helping product, marketing, sales, support, and leadership make more customer-centric decisions.

⚙️ How the Workflow Works

A cron job runs every hour and scans your Gmail inbox for new F5Bot alert emails. When an alert is found, the workflow extracts all mention data from the email. An AI node processes each mention to categorize it by topic and tag its sentiment. All data is stored in Supabase, and displayed in a WeWeb dashboard where users can browse mentions.
If a user wants deeper context, they click "AI Summary." This triggers a webhook in n8n, which pulls the main Reddit post and its entire comment chain. The AI node summarizes the full thread and highlights the core discussion, competitor comparisons, what users like or dislike, and industry-level signals. The workflow returns a clean, actionable summary back to the WeWeb UI.

🧪 Requirements

You don't need any heavy infrastructure. To get started, you'll need:

**F5Bot account (free)** - to track Reddit mentions by keywords and trigger email alerts
**Gmail integration** - so the workflow can parse emails from F5Bot
**OpenAI API key** - for AI-powered categorization and summarization
**Supabase project (free)** - to store all mention data
**WeWeb account (free)** - connects your n8n workflow to a clean, user-friendly dashboard for viewing insights

Here's a detailed setup guide.

🔧 Want to Go Further?

This setup is beginner-friendly, but you can extend it with blog topic generation, full blog post generation, social media posts, competitor benchmarking reports, weekly or monthly email digests, and Slack alerts for high-signal mentions.
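Once the AI node has tagged each mention, turning raw mentions into dashboard numbers is an aggregation step. A minimal sketch of that grouping, assuming each stored mention carries `sentiment` and `topic` fields as described above:

```python
from collections import Counter

def summarize_mentions(mentions):
    """Count mentions per sentiment and per topic for dashboard display."""
    return {
        "by_sentiment": Counter(m["sentiment"] for m in mentions),
        "by_topic": Counter(m["topic"] for m in mentions),
    }
```

In the template this aggregation happens on the WeWeb side over the Supabase data; the sketch just shows the shape of the computation.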

Extract Email Tasks with Gmail, ChatGPT-4o and Supabase

📩 Gmail → GPT → Supabase | Task Extractor

This n8n workflow automates the extraction of actionable tasks from unread Gmail messages using OpenAI's GPT API, stores the resulting task metadata in Supabase, and avoids re-processing previously handled emails.

✅ What It Does

Triggers on a schedule to check for unread emails in your Gmail inbox
Loops through each email individually using SplitInBatches
Checks Supabase to see if the email has already been processed
If it's a new email: formats the email content into a structured GPT prompt, calls ChatGPT-4o to extract structured task data, and inserts the result into your emails table in Supabase

🧰 Prerequisites

An active n8n Cloud or self-hosted instance
A connected Gmail account with OAuth credentials in n8n
A Supabase project with an emails table and:

ALTER TABLE emails ADD CONSTRAINT unique_email_id UNIQUE (email_id);

An OpenAI API key with access to GPT-4o or GPT-3.5-turbo

🔐 Required Credentials

| Name           | Type   | Description                     |
|----------------|--------|---------------------------------|
| Gmail OAuth    | Gmail  | To pull unread messages         |
| OpenAI API Key | OpenAI | To generate task summaries      |
| Supabase API   | HTTP   | For inserting rows via REST API |

🔁 Environment Variables or Replacements

Supabase_TaskManagement_URI → e.g., https://your-project.supabase.co
Supabase_TaskManagement_ANON_KEY → Your Supabase anon key

These are used in the HTTP request to Supabase.

⏰ Scheduling / Trigger

Triggered using a Schedule node (default: every X minutes; adjust to your preference). Uses a Gmail API filter: unread emails with label = INBOX.

🧠 Intended Use Case

> Designed for productivity-minded professionals who want to extract, summarize, and store actionable tasks from incoming email, without processing the same email twice or wasting GPT API credits.

This is part of a larger system integrating GPT, calendar scheduling, and optional task platforms (like ClickUp).
📦 Output (Stored in Supabase)

Each processed email includes: email_id, subject, sender, received_at, body (email snippet), gpt_summary (structured task), requires_deep_work (from GPT logic), deleted (initially false)
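The "never process the same email twice" guard described above is a lookup against already-stored `email_id` values before any GPT call is made. A minimal sketch, with the Supabase lookup represented by a plain set of IDs:

```python
def select_unprocessed(emails, processed_ids):
    """Return only emails whose email_id has not been stored yet,
    so no GPT credits are spent re-summarizing known messages."""
    return [e for e in emails if e["email_id"] not in processed_ids]
```

The `UNIQUE (email_id)` constraint from the prerequisites acts as a second line of defense: even if the check is skipped, a duplicate insert fails instead of creating a second row.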

Interactive Knowledge Base Chat with Supabase RAG using AI 📚💬

Google Drive File Ingestion to Supabase for Knowledge Base 📂💾

Overview 🌟

This n8n workflow automates the process of ingesting files from Google Drive into a Supabase database, preparing them for a knowledge base system. It supports text-based files (PDF, DOCX, TXT, etc.) and tabular data (XLSX, CSV, Google Sheets), extracting content, generating embeddings, and storing data in structured tables. This is a foundational workflow for building a company knowledge base that can be queried via a chat interface (e.g., using a RAG workflow). 🚀

Problem Solved 🎯

Manually managing a knowledge base with files from Google Drive is time-consuming and error-prone. This workflow solves that by:

Automatically ingesting files from Google Drive as they are created or updated
Extracting content from various file types (text and tabular)
Generating embeddings for text-based files to enable vector search
Storing data in Supabase for efficient retrieval
Handling duplicates and errors to ensure data consistency

Target audience:

**Knowledge Managers**: Build a centralized knowledge base from company files.
**Data Teams**: Automate the ingestion of spreadsheets and documents.
**Developers**: Integrate with other workflows (e.g., RAG for querying the knowledge base).

Workflow Description 🔍

This workflow listens for new or updated files in Google Drive, processes them based on their type, and stores the extracted data in Supabase tables for later retrieval. Here's how it works:

File Detection: Triggers when a file is created or updated in Google Drive.
File Processing: Loops through each file, extracts metadata, and validates the file type.
Duplicate Check: Ensures the file hasn't been processed before.
Content Extraction: Text-based files are downloaded, their text extracted, split into chunks, embedded, and stored in Supabase; tabular files have their data extracted and stored as rows in Supabase.
Metadata Storage: Stores file metadata and basic info in Supabase tables.
Error Handling: Logs errors for unsupported formats or duplicates.

Nodes Breakdown 🛠️

Detect New File 🔔
**Type**: Google Drive Trigger
**Purpose**: Triggers the workflow when a new file is created in Google Drive.
**Configuration**: Credential: Google Drive OAuth2; Event: File Created
**Customization**: Specify a folder to monitor specific directories.

Detect Updated File 🔔
**Type**: Google Drive Trigger
**Purpose**: Triggers the workflow when a file is updated in Google Drive.
**Configuration**: Credential: Google Drive OAuth2; Event: File Updated
**Customization**: Currently disconnected; reconnect if updates need to be processed.

Process Each File 🔄
**Type**: Loop Over Items
**Purpose**: Processes each file individually from the Google Drive trigger.
**Configuration**: Input: {{ $json.files }}
**Customization**: Adjust the batch size if processing multiple files at once.

Extract File Metadata 🆔
**Type**: Set
**Purpose**: Extracts metadata like file_id, file_name, mime_type, and web_view_link.
**Configuration**: Fields: file_id: {{ $json.id }}, file_name: {{ $json.name }}, mime_type: {{ $json.mimeType }}, web_view_link: {{ $json.webViewLink }}
**Customization**: Add more metadata fields if needed (e.g., size, createdTime).

Check File Type ✅
**Type**: IF
**Purpose**: Validates the file type by checking the MIME type.
**Configuration**: Condition: mime_type contains supported types (e.g., application/pdf, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet).
**Customization**: Add more supported MIME types as needed.

Find Duplicates 🔍
**Type**: Supabase
**Purpose**: Checks if the file has already been processed by querying knowledge_base.
**Configuration**: Operation: Select; Table: knowledge_base; Filter: file_id = {{ $node['Extract File Metadata'].json.file_id }}
**Customization**: Add additional duplicate checks (e.g., by file name).
Handle Duplicates 🔄
**Type**: IF
**Purpose**: Routes the workflow based on whether a duplicate is found.
**Configuration**: Condition: {{ $node['Find Duplicates'].json.length > 0 }}
**Customization**: Add notifications for duplicates if desired.

Remove Old Text Data 🗑️
**Type**: Supabase
**Purpose**: Deletes old text data from documents if the file is a duplicate.
**Configuration**: Operation: Delete; Table: documents; Filter: metadata->>'file_id' = {{ $node['Extract File Metadata'].json.file_id }}
**Customization**: Add logging before deletion.

Remove Old Data 🗑️
**Type**: Supabase
**Purpose**: Deletes old tabular data from document_rows if the file is a duplicate.
**Configuration**: Operation: Delete; Table: document_rows; Filter: dataset_id = {{ $node['Extract File Metadata'].json.file_id }}
**Customization**: Add logging before deletion.

Route by File Type 🔀
**Type**: Switch
**Purpose**: Routes the workflow based on the file's MIME type (text-based or tabular).
**Configuration**: Rules: Based on mime_type (e.g., application/pdf for text, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet for tabular).
**Customization**: Add more routes for additional file types.

Download File Content 📥
**Type**: Google Drive
**Purpose**: Downloads the file content for text-based files.
**Configuration**: Credential: Google Drive OAuth2; File ID: {{ $node['Extract File Metadata'].json.file_id }}
**Customization**: Add error handling for download failures.

Extract PDF Text 📜
**Type**: Extract from File (PDF)
**Purpose**: Extracts text from PDF files.
**Configuration**: File Content: {{ $node['Download File Content'].binary.data }}
**Customization**: Adjust extraction settings for better accuracy.

Extract DOCX Text 📜
**Type**: Extract from File (DOCX)
**Purpose**: Extracts text from DOCX files.
**Configuration**: File Content: {{ $node['Download File Content'].binary.data }}
**Customization**: Add support for other text formats (e.g., TXT, RTF).
Extract XLSX Data 📊
**Type**: Extract from File (XLSX)
**Purpose**: Extracts tabular data from XLSX files.
**Configuration**: File ID: {{ $node['Extract File Metadata'].json.file_id }}
**Customization**: Add support for CSV or Google Sheets.

Split Text into Chunks ✂️
**Type**: Text Splitter
**Purpose**: Splits extracted text into manageable chunks for embedding.
**Configuration**: Chunk Size: 1000; Chunk Overlap: 200
**Customization**: Adjust chunk size and overlap based on document length.

Generate Text Embeddings 🌐
**Type**: OpenAI
**Purpose**: Generates embeddings for text chunks using OpenAI.
**Configuration**: Credential: OpenAI API key; Operation: Embedding; Model: text-embedding-ada-002
**Customization**: Switch to a different embedding model if needed.

Store Text in Supabase 💾
**Type**: Supabase Vector Store
**Purpose**: Stores text chunks and embeddings in the documents table.
**Configuration**: Credential: Supabase credentials; Operation: Insert Documents; Table Name: documents
**Customization**: Add metadata fields to store additional context.

Store Tabular Data 💾
**Type**: Supabase
**Purpose**: Stores tabular data in the document_rows table.
**Configuration**: Operation: Insert; Table: document_rows; Columns: dataset_id, row_data
**Customization**: Add validation for tabular data structure.

Store File Metadata 📋
**Type**: Supabase
**Purpose**: Stores file metadata in the document_metadata table.
**Configuration**: Operation: Insert; Table: document_metadata; Columns: file_id, file_name, file_type, file_url
**Customization**: Add more metadata fields as needed.

Record in Knowledge Base 📚
**Type**: Supabase
**Purpose**: Stores basic file info in the knowledge_base table.
**Configuration**: Operation: Insert; Table: knowledge_base; Columns: file_id, file_name, file_type, file_url, upload_date
**Customization**: Add indexes for faster lookups.

Log File Errors ⚠️
**Type**: Supabase
**Purpose**: Logs errors for unsupported file types.
**Configuration**: Operation: Insert; Table: error_log; Columns: error_type, error_message
**Customization**: Add notifications for errors.

Log Duplicate Errors ⚠️
**Type**: Supabase
**Purpose**: Logs errors for duplicate files.
**Configuration**: Operation: Insert; Table: error_log; Columns: error_type, error_message
**Customization**: Add notifications for duplicates.

Interactive Knowledge Base Chat with Supabase RAG using GPT-4o-mini 📚💬

Introduction 🌟

This n8n workflow creates an interactive chat interface that allows users to query a company knowledge base using Retrieval-Augmented Generation (RAG). It retrieves relevant information from text documents and tabular data stored in Supabase, then generates natural language responses using OpenAI's GPT-4o-mini model. Designed for teams managing internal knowledge, this workflow enables users to ask questions like "What's the remote work policy?" or "Show me the latest budget data" and receive accurate, context-aware responses in a conversational format. 🚀

Problem Statement 🎯

Managing a company knowledge base can be a daunting task: employees often struggle to find specific information buried in documents or spreadsheets, leading to wasted time and inefficiencies. Traditional search methods may not understand natural language queries or provide contextually relevant results. This workflow solves these issues by:

Offering a chat-based interface for natural language queries, making it easy for users to ask questions in their own words
Leveraging RAG to retrieve relevant text and tabular data from Supabase, ensuring responses are accurate and context-aware
Supporting diverse file types, including text-based files (e.g., PDFs, DOCX) and tabular data (e.g., XLSX, CSV), for comprehensive knowledge access
Maintaining conversation history to provide context during interactions, improving the user experience
Target Audience 👥

This workflow is ideal for:

**HR Teams**: Quickly access company policies, employee handbooks, or benefits documents.
**Finance Teams**: Retrieve budget data, expense reports, or financial summaries from spreadsheets.
**Knowledge Managers**: Build a centralized assistant for internal documentation, streamlining information access.
**Developers**: Extend the workflow with additional tools or integrations for custom use cases.

Workflow Description 🔍

This workflow consists of a chat interface powered by n8n's Chat Trigger node, an AI Agent node for RAG, and several tools to retrieve data from Supabase. Here's how it works step-by-step:

User Initiates a Chat: The user interacts with a chat interface, sending queries like "Summarize our remote work policy" or "Show budget data for Q1 2025."
Query Processing with RAG: The AI Agent processes the query using RAG, retrieving relevant data from Supabase tables and generating a response with OpenAI's GPT-4o-mini model.
Data Retrieval and Response Generation: The workflow uses multiple tools to fetch data: it retrieves text chunks from the documents table using vector search, fetches tabular data from the document_rows table based on file IDs, extracts full document text or lists available files as needed, and generates a natural language response combining the retrieved data.
Conversation History Management: Stores the conversation history in Supabase to maintain context for follow-up questions.
Response Delivery: Formats and sends the response back to the chat interface for the user to view.

Nodes Breakdown 🛠️

Start Chat Interface 💬
**Type**: Chat Trigger
**Purpose**: Provides the interactive chat interface for users to input queries and receive responses.
**Configuration**: Chat Title: Company Knowledge Base Assistant; Chat Subtitle: Ask me anything about company documents!; Welcome Message: Hello! I'm your Company Knowledge Base Assistant. How can I help you today?
Suggestions: What is the company policy on remote work?, Show me the latest budget data., List all policy documents.; Output Chat Session ID: true; Output User Message: true
**Customization**: Update the title and welcome message to align with your company branding (e.g., HR Knowledge Assistant). Add more suggestions relevant to your use case (e.g., What are the company benefits?).

Process Query with RAG 🧠
**Type**: AI Agent
**Purpose**: Orchestrates the RAG process by retrieving relevant data using tools and generating responses with OpenAI's GPT-4o-mini.
**Configuration**: Credential: OpenAI API key; Model: gpt-4o-mini; Input Field: {{ $node['Start Chat Interface'].json.message }}; System Prompt: You are a helpful assistant for a company knowledge base. Use the provided tools to retrieve relevant information from documents and tabular data. If the query involves tabular data, format it clearly in your response. If no relevant data is found, respond with "I couldn't find any relevant information. Can you provide more details?"
**Customization**: Switch to a different model (e.g., gpt-3.5-turbo) to adjust cost or performance. Modify the system prompt to change the tone (e.g., more formal for HR use cases).

Retrieve Text Chunks 📄
**Type**: Supabase Vector Store (Tool)
**Purpose**: Retrieves relevant text chunks from the documents table using vector search.
**Configuration**: Credential: Supabase credentials; Operation Mode: Retrieve Documents (As Tool for AI Agent); Table Name: documents; Embedding Field: embedding; Content Field: content_text; Metadata Field: metadata; Embedding Model: OpenAI text-embedding-ada-002; Top K: 10
**Customization**: Adjust Top K to retrieve more or fewer results (e.g., 5 for faster responses). Ensure the match_documents function (see prerequisites) is defined in Supabase.

Fetch Tabular Data 📊
**Type**: Supabase (Tool, Execute Query)
**Purpose**: Retrieves tabular data from the document_rows table based on a file ID.
**Configuration**: Credential: Supabase credentials; Operation: Execute Query; Query: SELECT row_data FROM document_rows WHERE dataset_id = $1 LIMIT 10; Tool Description: Run a SQL query - use this to query from the document_rows table once you know the file ID you are querying. dataset_id is the file_id and you are always using the row_data for filtering, which is a jsonb field that has all the keys from the file schema given in the document_metadata table.
**Customization**: Modify the query to filter specific columns or add conditions (e.g., WHERE dataset_id = $1 AND row_data->>'year' = '2025'). Increase the LIMIT for larger datasets.

Extract Full Document Text 📜
**Type**: Supabase (Tool, Execute Query)
**Purpose**: Fetches the full text of a document by concatenating all text chunks for a given file_id.
**Configuration**: Credential: Supabase credentials; Operation: Execute Query; Query: SELECT string_agg(content_text, ' ') as document_text FROM documents WHERE metadata->>'file_id' = $1 GROUP BY metadata->>'file_id'; Tool Description: Given a file ID, fetch the text from the documents
**Customization**: Add filters to the query if needed (e.g., limit to specific metadata fields).

List Available Files 📋
**Type**: Supabase (Tool, Select)
**Purpose**: Lists all files in the knowledge base from the document_metadata table.
**Configuration**: Credential: Supabase credentials; Operation: Select; Schema: public; Table: document_metadata; Tool Description: Use this tool to fetch all documents including the table schema if the file is csv, excel or xlsx
**Customization**: Add filters to list specific file types (e.g., WHERE file_type = 'application/pdf'). Modify the columns selected to include additional metadata (e.g., file_size).

Manage Chat History 💾
**Type**: Postgres Chat Memory (Tool)
**Purpose**: Stores and retrieves conversation history to maintain context.
**Configuration**: Credential: Supabase credentials (Postgres-compatible); Table Name: n8n_chat_history; Session ID Field: session_id; Session ID Value: {{ $node['Start Chat Interface'].json.sessionId }}; Message Field: message; Sender Field: sender; Timestamp Field: timestamp; Context Window Length: 5
**Customization**: Increase the context window length for longer conversations (e.g., 10 messages). Add indexes on session_id and timestamp in Supabase for better performance.

Format and Send Response 📤
**Type**: Set
**Purpose**: Formats the AI Agent's response and sends it back to the chat interface.
**Configuration**: Fields: response: {{ $node['Process Query with RAG'].json.output }}
**Customization**: Add additional formatting to the response if needed (e.g., prepend with a timestamp or apply markdown formatting).

Setup Instructions 🛠️

Prerequisites 📋

n8n Setup: Ensure you're using n8n version 1.0 or higher, and enable the AI features in n8n settings.

Supabase: Create a Supabase project and set up the following tables:

documents: id (uuid), content_text (text), embedding (vector(1536)), metadata (jsonb)
document_rows: id (uuid), dataset_id (varchar), row_data (jsonb)
document_metadata: file_id (varchar), file_name (varchar), file_type (varchar), file_url (text)
knowledge_base: id (serial), file_id (varchar), file_name (varchar), file_type (varchar), file_url (text), upload_date (timestamp)
n8n_chat_history: id (serial), session_id (varchar), message (text), sender (varchar), timestamp (timestamp)

Add the match_documents function to Supabase to enable vector search:

CREATE OR REPLACE FUNCTION match_documents (
  query_embedding vector(1536),
  match_count int DEFAULT 5,
  filter jsonb DEFAULT '{}'
) RETURNS TABLE (
  id uuid,
  content_text text,
  metadata jsonb,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    documents.id,
    documents.content_text,
    documents.metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  FROM documents
  WHERE documents.metadata @> filter
  ORDER BY similarity DESC
  LIMIT match_count;
END;
$$;
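The SQL function above does three things: filter rows by metadata containment, score the remainder by cosine similarity, and return the top matches. A pure-Python mirror of that logic (using exact equality on metadata keys rather than jsonb `@>` containment, and in-memory documents rather than a table) makes the ranking easy to reason about:

```python
import math

def cosine_similarity(a, b):
    """Dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_documents(query_embedding, docs, match_count=5, metadata_filter=None):
    """Filter docs by metadata, rank by similarity, return top matches."""
    metadata_filter = metadata_filter or {}
    candidates = [
        d for d in docs
        if all(d["metadata"].get(k) == v for k, v in metadata_filter.items())
    ]
    ranked = sorted(
        candidates,
        key=lambda d: cosine_similarity(query_embedding, d["embedding"]),
        reverse=True,
    )
    return ranked[:match_count]
```

In the workflow, the Supabase Vector Store tool invokes the SQL version server-side, so embeddings never leave the database.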

Resume Data Extraction and Storage in Supabase from Email Attachments

Description

What Problem Does This Solve? 🛠️

This workflow automates the process of extracting key information from resumes received as email attachments and storing that data in a structured format within a Supabase database. It eliminates the manual effort of reviewing each resume, identifying relevant details, and entering them into a database. This streamlines the hiring process, making it faster and more efficient for recruiters and HR professionals.

Target audience: Recruiters, HR departments, and talent acquisition teams.

What Does It Do? 🌟

Monitors a designated email inbox for new messages with resume attachments
Extracts key information such as name, contact details, education, work experience, and skills from the attached resumes
Cleans and formats the extracted data
Stores the processed data securely in a Supabase database

Key Features 📋

Automatic email monitoring for resume attachments
Intelligent data extraction from various resume formats (e.g., PDF, DOC, DOCX)
Customizable data fields to capture specific information
Seamless integration with Supabase for data storage
Uses OpenRouter to streamline API key management for services such as AI-powered parsing

Setup Instructions

Prerequisites ⚙️

**n8n Instance**: Self-hosted or cloud instance of n8n.
**Email Account**: Gmail account with Gmail API access for receiving resumes.
**Supabase Account**: A Supabase project with a database/table ready to store extracted resume data. You'll need the Supabase URL and API key.
**OpenRouter Account**: For managing AI model API keys centrally when using LLM-based resume parsing.

Installation Steps 📦

Import the Workflow: Copy the exported workflow JSON and import it into your n8n instance via "Import from File" or "Import from URL".
Configure Credentials: In n8n > Credentials, add credentials for:
  Email account (Gmail API): Provide Client ID and Client Secret from the Google Cloud Platform.
  Supabase: Provide the Supabase URL and the anon public API key.
  OpenRouter (Optional): Add your OpenRouter API key for use with any AI-powered resume parsing nodes.
Assign these credentials to their respective nodes:
  Gmail Trigger → Email credentials
  Supabase Insert → Supabase credentials
  AI Parsing Node → OpenRouter credentials
Set Up Supabase Table: Create a table in Supabase with columns such as name, email, phone, education, experience, skills, received_date, etc. Make sure the field names align with the structure used in your workflow.
Customize Nodes: Optionally modify the workflow to use an OpenAI model directly for field extraction, replacing the Basic LLM Chain node that utilizes OpenRouter.
Test the Workflow: Send a test email with a resume attachment. Check n8n's execution log to confirm the workflow triggered, parsed the data, and inserted it into Supabase. Verify data integrity in your Supabase table.

How It Works

High-Level Workflow 🔍

Email Monitoring: Triggered when a new email with an attachment is received (via Gmail API).
Attachment Check: Verifies the email contains at least one attachment.
Prepare Data: Extracts the attachment and prepares it for analysis.
Data Extraction: Uses an OpenRouter-powered LLM (if configured) to extract structured information from the resume.
Data Storage: The structured information is saved into the Supabase database.

Node Names and Actions (Example)

**Gmail Trigger**: Triggers when a new email is received.
**IF**: Checks whether the received email includes any attachments.
**Get Attachments**: Retrieves attachments from the triggering email.
**Prepare Data**: Prepares the attachment content for processing.
**Basic LLM Chain**: Uses an AI model via OpenRouter to extract relevant resume data and returns it as structured fields.
**Supabase-Insert**: Inserts the structured resume data into your Supabase database.

Raw Materials Inventory Management with Google Sheets, Supabase and Approvals

Automated Raw Materials Inventory Management with Google Sheets, Supabase, and Gmail using n8n Webhooks

Description

What Problem Does This Solve? 🛠️

This workflow automates raw materials inventory management for businesses, eliminating manual stock updates, delayed material issue approvals, and missed low stock alerts. It ensures real-time stock tracking, streamlined approvals, and timely notifications.

Target audience: Small to medium-sized businesses, inventory managers, and n8n users familiar with Google Sheets, Supabase, and Gmail integrations.

What Does It Do? 🌟

- Receives raw material data and issue requests via form submissions.
- Updates stock levels in Google Sheets and Supabase.
- Manages approvals for material issue requests with email notifications.
- Detects low stock levels and sends alerts via Gmail.
- Maintains data consistency across Google Sheets and Supabase.

Key Features

- Real-time stock updates from form submissions.
- Automated approval process for material issuance.
- Low stock detection with Gmail notifications.
- Dual storage in Google Sheets and Supabase for redundancy.
- Error handling for robust data validation.

Setup Instructions

Prerequisites

- **n8n Instance**: Self-hosted or cloud n8n instance.
- **API Credentials**:
  - Google Sheets API: Credentials from the Google Cloud Console with the Sheets scope, stored in n8n credentials.
  - Supabase API: API key and URL from your Supabase project, stored in n8n credentials (do not hardcode them in nodes).
  - Gmail API: Credentials from the Google Cloud Console with the Gmail scope.
- **Forms**: A form (e.g., a Google Form) to submit raw material receipts and issue requests, configured to send data to n8n webhooks.

Installation Steps

1. Import the Workflow: Copy the workflow JSON from the "Template Code" section (to be provided) and import it into n8n via "Import from File" or "Import from URL".
2. Configure Credentials: Add API credentials in n8n's Credentials section for Google Sheets, Supabase, and Gmail, then assign them to the respective nodes. For example: in the Append Raw Materials node, use Google Sheets credentials ({{ $credentials.GoogleSheets }}); in the Current Stock Update node, use Supabase credentials ({{ $credentials.Supabase }}); in the Send Low Stock Email Alert node, use Gmail credentials.
3. Set Up Nodes:
   - Webhook Nodes (Receive Raw Materials Webhook, Receive Material Issue Webhook): Configure the webhook URLs and link them to your form submissions.
   - Approval Email (Send Approval Request): Customize the HTML email template if needed.
   - Low Stock Alerts (Send Low Stock Email Alert, Send Low Stock Email After Issue): Configure the recipient email addresses.
4. Test the Workflow: Submit a test form for a raw material receipt and verify the stock updates in Google Sheets/Supabase. Then submit a material issue request, approve or reject it, and confirm the stock updates and notifications.

How It Works

High-Level Steps

1. Receive Raw Materials: Processes form submissions for raw material receipts.
2. Update Stock: Updates stock levels in Google Sheets and Supabase.
3. Handle Issue Requests: Processes material issue requests via forms.
4. Manage Approvals: Sends approval requests and processes decisions.
5. Monitor Stock Levels: Detects low stock and sends Gmail alerts.

Detailed Descriptions

Detailed node descriptions are available in the sticky notes within the workflow screenshot (to be provided). Below is a summary of key actions.

Node Names and Actions

Raw Materials Receiving and Stock Update

- **Receive Raw Materials Webhook**: Receives raw material data from a form submission.
- **Standardize Raw Material Data**: Maps form data into a consistent format.
- **Calculate Total Price**: Computes Total Price (Quantity Received * Unit Price).
- **Append Raw Materials**: Records the receipt in Google Sheets.
- **Check Quantity Received Validity**: Ensures Quantity Received is valid.
- **Lookup Existing Stock**: Retrieves current stock for the Product ID.
- **Check If Product Exists**: Branches based on Product ID existence.
- **Calculate Updated Current Stock**: Adds Quantity Received to stock (True branch).
- **Update Current Stock**: Updates stock in Google Sheets (True branch).
- **Retrieve Updated Stock for Check**: Retrieves updated stock for the low stock check.
- **Detect Low Stock Level**: Flags if stock is below the minimum.
- **Trigger Low Stock Alert**: Triggers an email if stock is low.
- **Send Low Stock Email Alert**: Sends a low stock alert via Gmail.
- **Add New Product to Stock**: Adds a new product to stock (False branch).
- **Current Stock Update**: Updates the Supabase Current Stock table.
- **New Row Current Stock**: Inserts the new product into Supabase.
- **Search Current Stock**: Retrieves Supabase stock records.
- **New Record Raw**: Inserts the raw material record into Supabase.
- **Format Response**: Removes duplicates from the Supabase response.
- **Combine Stock Update Branches**: Merges the branches for existing and new products.

Material Issue Request and Approval

- **Receive Material Issue Webhook**: Receives the issue request from a form submission.
- **Standardize Data**: Normalizes request data and adds the Approval Link.
- **Validate Issue Request Data**: Ensures Quantity Requested is valid.
- **Verify Requested Quantity**: Validates the Product ID and Submission ID.
- **Append Material Request**: Records the request in Google Sheets.
- **Check Available Stock for Issue**: Retrieves current stock for the request.
- **Prepare Approval**: Checks stock sufficiency for the request.
- **Send Approval Request**: Emails the approver with Approve/Reject options.
- **Receive Approval Response**: Captures the approver's decision via webhook.
- **Format Approval Response**: Processes approval data with the Approval Date.
- **Verify Approval Data**: Validates the approval response.
- **Retrieve Issue Request Details**: Retrieves the original request from Google Sheets.
- **Process Approval Decision**: Branches based on the approval action.
- **Get Stock for Issue Update**: Retrieves stock before the update (Approved).
- **Deduct Issued Stock**: Reduces stock by the Approved Quantity (Approved).
- **Update Stock After Issue**: Updates stock in Google Sheets (Approved).
- **Retrieve Stock After Issue**: Retrieves updated stock for the low stock check.
- **Detect Low Stock After Issue**: Flags low stock after issuance.
- **Trigger Low Stock Alert After Issue**: Triggers an email if stock is low.
- **Send Low Stock Email After Issue**: Sends a low stock alert via Gmail.
- **Update Issue Request Status**: Updates the request status (Approved/Rejected).
- **Combine Stock Lookup Results**: Merges the stock lookup branches.
- **Create Record Issue**: Inserts the issue request into Supabase.
- **Search Stock by Product ID**: Retrieves Supabase stock records.
- **Issues Table Update**: Updates the Supabase Materials Issued table.
- **Update Current Stock**: Updates Supabase stock after issuance.
- **Combine Issue Lookup Branches**: Merges the issue lookup branches.
- **Search Issue by Submission ID**: Retrieves Supabase issue records.

Customization Tips

- **Expand Storage Options**: Add nodes to store data in other databases (e.g., Airtable) alongside Google Sheets and Supabase.
- **Modify Approval Email**: Update the Send Approval Request node to customize the HTML email template (e.g., adjust styling or add branding).
- **Alternative Notifications**: Add nodes to send low stock alerts via other platforms (e.g., Slack or Telegram).
- **Adjust Low Stock Threshold**: Modify the Detect Low Stock Level node to change the Minimum Stock Level (default: 50).
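The stock math running through the nodes above (Calculate Total Price, Calculate Updated Current Stock, Deduct Issued Stock, Detect Low Stock Level) can be sketched as two small functions, for example in an n8n Code node. The threshold of 50 matches the stated default; the field names are illustrative assumptions:

```python
MIN_STOCK_LEVEL = 50  # default threshold used by the Detect Low Stock Level node

def process_receipt(item, current_stock):
    """Mirror the receiving path: total price, updated stock, low-stock flag."""
    total_price = item["quantity_received"] * item["unit_price"]
    updated_stock = current_stock + item["quantity_received"]
    return {
        "total_price": total_price,
        "updated_stock": updated_stock,
        "low_stock": updated_stock < MIN_STOCK_LEVEL,
    }

def process_issue(quantity_requested, current_stock):
    """Mirror the approved-issue path: deduct stock, then re-check the threshold."""
    if quantity_requested > current_stock:
        raise ValueError("Insufficient stock for this issue request")
    remaining = current_stock - quantity_requested
    return {"remaining_stock": remaining, "low_stock": remaining < MIN_STOCK_LEVEL}

receipt = process_receipt({"quantity_received": 30, "unit_price": 2.5}, current_stock=10)
issue = process_issue(quantity_requested=25, current_stock=40)
```

In the workflow itself these checks are split across If nodes and expression fields, but the arithmetic is the same.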

Reddit Lead Finder: Automated Prospecting with GPT-4, Supabase and Gmail Alerts

This workflow monitors targeted subreddits for potential sales leads using Reddit's API, AI content analysis, Supabase, and Google Sheets. It is built specifically to discover posts from Reddit users who may benefit from a particular product or service, and it can be easily customized for any market.

🔍 Features

- **Targeted Subreddit Monitoring**: Searches multiple niche subreddits like smallbusiness, startup, sweatystartup, etc., using relevant keywords.
- **AI-Powered Relevance Scoring**: Uses OpenAI GPT to analyze each post and determine if it's written by someone who might benefit from your product, returning a simple "yes" or "no."
- **Duplicate Lead Filtering with Supabase**: Ensures you don't email the same lead more than once by storing already-processed Reddit post IDs in a Supabase table.
- **Content Filtering**: Filters out posts with no body text or no upvotes so that only high-quality content is processed.
- **Lead Storage in Google Sheets**: Saves qualified leads into a connected Google Sheet with key data (URL, post content, subreddit, and timestamp).
- **Email Digest Alerts**: Compiles relevant leads and sends a daily digest of matched posts to your team's inbox for review or outreach.
- **Manual or Scheduled Trigger**: Can be triggered manually or run automatically on a schedule (via the built-in Schedule Trigger node).

⚙️ Tech Stack

- **Reddit API** – for post discovery
- **OpenAI Chat Model** – for AI-based relevance filtering
- **Supabase** – for lead de-duplication
- **Google Sheets** – for storing lead details
- **Gmail API** – for sending email alerts

🔧 Customization Tips

- **Adjust Audience**: Modify the subreddits and keywords in the initial Code node to match your market.
- **Change the AI Prompt**: Customize the prompt in the "Analysis Content by AI" node to describe your product or service.
- **Search Comments Instead**: To monitor comments instead of posts, change type=link to type=comment in the Reddit Search node.
- **Change Email Recipients**: Edit the Gmail node to direct leads to a different email address or format.
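The content and duplicate filters described above can be sketched as follows. A plain Python set stands in for the Supabase table of processed post IDs (in the workflow that lookup is a Supabase read plus an If node); the field names `id`, `selftext`, and `ups` follow Reddit's post JSON:

```python
def filter_new_leads(posts, seen_ids):
    """Keep only posts with body text and upvotes that we have not processed before."""
    fresh = []
    for post in posts:
        if not post.get("selftext"):   # drop posts with no body text
            continue
        if post.get("ups", 0) < 1:     # drop posts with no upvotes
            continue
        if post["id"] in seen_ids:     # de-duplicate against stored post IDs
            continue
        fresh.append(post)
        seen_ids.add(post["id"])       # remember it so the next run skips it
    return fresh

posts = [
    {"id": "a1", "selftext": "Need help invoicing clients", "ups": 12},
    {"id": "a2", "selftext": "", "ups": 5},           # filtered: no body
    {"id": "a3", "selftext": "Old post", "ups": 0},   # filtered: no upvotes
    {"id": "a1", "selftext": "Duplicate", "ups": 3},  # filtered: already seen
]
leads = filter_new_leads(posts, seen_ids=set())
```

Only the first post survives; the survivors would then go to the AI relevance check and, if matched, to Google Sheets and the email digest.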

Build your own Gmail and Supabase integration

Create custom Gmail and Supabase workflows by choosing triggers and actions. Nodes come with global operations and settings, as well as app-specific parameters that can be configured. You can also use the HTTP Request node to query data from any app or service with a REST API.
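For reads that go beyond the built-in operations, the HTTP Request node can query Supabase's REST API directly. A small sketch of building a filtered query URL in PostgREST syntax (the `resumes` table and `email` column are illustrative assumptions, not part of any node):

```python
from urllib.parse import urlencode

def supabase_select_url(project_url, table, filters=None, select="*"):
    """Build a Supabase REST (PostgREST) read URL with equality filters,
    suitable for pasting into an HTTP Request node."""
    params = {"select": select}
    for column, value in (filters or {}).items():
        params[column] = f"eq.{value}"  # PostgREST equality filter syntax
    return f"{project_url.rstrip('/')}/rest/v1/{table}?{urlencode(params)}"

url = supabase_select_url(
    "https://abc123.supabase.co", "resumes", filters={"email": "jane@example.com"}
)
```

Send the request with the `apikey` and `Authorization: Bearer` headers set to your Supabase key, the same credentials the Supabase node uses.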

Gmail supported actions

Message:
Add Label
Delete
Get
Get Many
Mark as Read
Mark as Unread
Remove Label
Reply
Send
Send and Wait for Response

Draft:
Create
Delete
Get
Get Many

Label:
Create
Delete
Get
Get Many

Thread:
Add Label
Delete
Get
Get Many
Remove Label
Reply
Trash
Untrash

Supabase supported actions

Create: Create a new row
Delete: Delete a row
Get: Get a row
Get Many: Get many rows
Update: Update a row

FAQs

  • Can Gmail connect with Supabase?

  • Can I use Gmail’s API with n8n?

  • Can I use Supabase’s API with n8n?

  • Is n8n secure for integrating Gmail and Supabase?

  • How do I get started with the Gmail and Supabase integration in n8n?

Need help setting up your Gmail and Supabase integration?

Discover our community's latest recommendations and join the discussions about Gmail and Supabase integration.
jake chard
Jan Koch
Paul Kennard

Looking to integrate Gmail and Supabase in your company?

Over 3000 companies switch to n8n every single week

Why use n8n to integrate Gmail with Supabase

Build complex workflows, really fast

Handle branching, merging and iteration easily.
Pause your workflow to wait for external events.

Code when you need it, UI when you don't

Simple debugging

Your data is displayed alongside your settings, making edge cases easy to track down.

Use templates to get started fast

Use 1000+ workflow templates available from our core team and our community.

Reuse your work

Copy and paste, easily import and export workflows.

Implement complex processes faster with n8n
