
Google Gemini Chat Model and Postgres integration

Save yourself the work of writing custom integrations for Google Gemini Chat Model and Postgres and use n8n instead. Build adaptable and scalable AI, LangChain, Development, and Data & Storage workflows that work with your technology stack. All within a building experience you will love.

How to connect Google Gemini Chat Model and Postgres

  • Step 1: Create a new workflow
  • Step 2: Add and configure nodes
  • Step 3: Connect
  • Step 4: Customize and extend your integration
  • Step 5: Test and activate your workflow

Step 1: Create a new workflow and add the first step

In n8n, click the "Add workflow" button in the Workflows tab to create a new workflow. Add the starting point: a trigger that determines when your workflow should run, such as an app event, a schedule, a webhook call, another workflow, an AI chat, or a manual trigger. Sometimes the HTTP Request node can serve as your starting point.

Google Gemini Chat Model and Postgres integration: Create a new workflow and add the first step

Step 2: Add and configure Google Gemini Chat Model and Postgres nodes

You can find the Google Gemini Chat Model and Postgres nodes in the nodes panel. Drag them onto your workflow canvas and select the action each node should perform. Click each node, choose a credential, and authenticate to grant n8n access. Configure the nodes one by one: input data on the left, parameters in the middle, and output data on the right.

Google Gemini Chat Model and Postgres integration: Add and configure Google Gemini Chat Model and Postgres nodes

Step 3: Connect Google Gemini Chat Model and Postgres

A connection links Google Gemini Chat Model and Postgres so data can route through the workflow: the output of one node becomes the input of the next. Each node can have one or more connections.

Google Gemini Chat Model and Postgres integration: Connect Google Gemini Chat Model and Postgres

Step 4: Customize and extend your Google Gemini Chat Model and Postgres integration

Use n8n's core nodes such as If, Split Out, Merge, and others to transform and manipulate data. Write custom JavaScript or Python in the Code node and run it as a step in your workflow. Connect Google Gemini Chat Model and Postgres with any of n8n’s 1000+ integrations, and incorporate advanced AI logic into your workflows.
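As a concrete (and purely illustrative) example of the kind of logic you might put in a Code node before passing data to the Postgres node, the sketch below normalizes row keys and coerces a numeric field. The field names ("name", "amount") are hypothetical; inside n8n you would read rows from the node's input rather than a literal list.

```python
# Illustrative sketch only: row clean-up of the sort a Code node might
# perform before a Postgres insert. Field names are hypothetical.

def clean_rows(rows):
    """Normalize keys (trim, lowercase, snake_case) and coerce amounts to floats."""
    cleaned = []
    for row in rows:
        item = {k.strip().lower().replace(" ", "_"): v for k, v in row.items()}
        if "amount" in item:
            # Strip thousands separators before converting to a number.
            item["amount"] = float(str(item["amount"]).replace(",", ""))
        cleaned.append(item)
    return cleaned

print(clean_rows([{" Name ": "Alice", "Amount": "1,200.50"}]))
# → [{'name': 'Alice', 'amount': 1200.5}]
```

The same shape works in the Code node's JavaScript mode; only the surrounding input/output plumbing differs.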

Google Gemini Chat Model and Postgres integration: Customize and extend your Google Gemini Chat Model and Postgres integration

Step 5: Test and activate your Google Gemini Chat Model and Postgres workflow

Save and run the workflow to see if everything works as expected. Depending on your configuration, data should flow from Google Gemini Chat Model to Postgres or vice versa. Debugging is straightforward: check past executions to isolate and fix any problems. Once you've tested everything, save your workflow and activate it.

Google Gemini Chat Model and Postgres integration: Test and activate your Google Gemini Chat Model and Postgres workflow

Query Google Sheets/CSV data through an AI agent using PostgreSQL

Want to see it in action? Watch the full breakdown here: 📺 Video Link

Template Description
This n8n workflow empowers you to query structured financial data from Google Sheets or CSV files using AI-generated SQL. Unlike traditional vector database solutions that falter with numerical queries, this template leverages PostgreSQL for efficient data storage and an AI agent to dynamically create optimized SQL queries from natural language inputs.

What It Does
Retrieves data from Google Sheets or CSV files
Infers the data schema and builds a PostgreSQL table
Populates the table with your data
Uses an AI agent to translate natural language questions into SQL queries
Returns precise numerical results quickly and efficiently
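The "infers the data schema" step can be sketched roughly as follows. This is an assumption-laden illustration, not the template's actual inference logic: it picks the narrowest PostgreSQL type that fits every sample value in a column.

```python
# Hedged sketch of schema inference: map sample sheet/CSV values to
# PostgreSQL column types. Column names and rules are hypothetical.

def infer_pg_type(values):
    """Return the narrowest PostgreSQL type that fits every sample value."""
    def fits(cast):
        try:
            for v in values:
                cast(v)
            return True
        except (TypeError, ValueError):
            return False
    if fits(int):
        return "BIGINT"
    if fits(float):
        return "DOUBLE PRECISION"
    return "TEXT"

def infer_schema(headers, rows):
    """Infer a column-name -> type mapping from header row plus data rows."""
    return {h: infer_pg_type([r[i] for r in rows]) for i, h in enumerate(headers)}

schema = infer_schema(["product", "units", "price"],
                      [["Widget", "3", "9.99"], ["Gadget", "12", "4.50"]])
print(schema)
# → {'product': 'TEXT', 'units': 'BIGINT', 'price': 'DOUBLE PRECISION'}
```

A `CREATE TABLE` statement then follows directly from the inferred mapping.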

Why Use This?
No SQL knowledge required—the AI generates queries for you
Bypasses the inefficiencies and costs of vector database approaches
Scales effortlessly without overwhelming the language model
Fully free and open-source

Setup Requirements

Pre-Conditions
  • PostgreSQL database: a running PostgreSQL instance (no specific extensions required beyond a standard installation).
  • Google Sheets access: a publicly accessible or shared Google Sheet URL with structured data (e.g., financial records). Need a starting point? Use this Sample Google Sheet Template.
  • n8n instance: a working n8n setup with access to the Google Drive and PostgreSQL nodes.

Step-by-Step Instructions
Add Your Google Sheets URL
Open the "Google Drive Trigger" node.
Replace the placeholder URL with your Google Sheet’s link.
Verify the sheet name matches your data source.

Configure PostgreSQL
Update the "PostgreSQL" nodes with your database credentials (host, database, user, password).
The workflow automatically creates and populates the table based on your data schema.

Run the Workflow
Execute the workflow manually to set up the database.
Once initialized, use the AI agent by asking questions like:
"How much did I sell last week?"
"What were the total sales for Product X in February?"

(Optional) Automate Updates
Add a "Schedule Trigger" node to sync your Google Sheets data with PostgreSQL on a regular basis.

How It Works
  • Schema detection: the workflow analyzes your Google Sheets or CSV data to infer its structure and create an appropriate PostgreSQL table.
  • AI-powered queries: an optimized AI agent converts your natural language questions into precise SQL queries, ensuring accurate results.
  • Efficient retrieval: by using PostgreSQL instead of vector-based methods, this template avoids common pitfalls like slow performance or inaccurate numerical outputs.
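When executing AI-generated SQL, it is prudent to accept only single, read-only SELECT statements. The guard below is not part of the template; it is a minimal sketch of the idea, and the keyword list is deliberately incomplete.

```python
# Hedged sketch: reject AI-generated SQL that is not a single read-only
# SELECT before sending it to PostgreSQL. Not part of the template.

import re

FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate|grant)\b", re.I)

def is_safe_select(sql):
    """True only for a single statement that starts with SELECT and
    contains no write/DDL keywords."""
    stmt = sql.strip().rstrip(";")
    return (";" not in stmt                      # one statement only
            and stmt.lower().startswith("select")
            and not FORBIDDEN.search(stmt))

print(is_safe_select("SELECT SUM(total) FROM sales WHERE product = 'X'"))  # True
print(is_safe_select("DROP TABLE sales"))                                  # False
```

For production use, a dedicated PostgreSQL role with read-only privileges is a stronger safeguard than string filtering.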

Tips for Success
Ensure your Google Sheet or CSV has consistent column headers for smooth schema detection.
Test with simple questions first to verify the AI agent’s query generation.
Check out the n8n Template Submission Guidelines for more best practices.


Popular Google Gemini Chat Model and Postgres workflows


Process Multiple Media Files in Telegram with Gemini AI & PostgreSQL Database

🤖📨 Telegram AI Assistant with Multi-File Media Group Handling, Smart File Processing & PostgreSQL Integration

An AI-powered Telegram bot for text, voice, video, documents, and media, with database-driven grouping and Telegram-safe formatting.

📋 Description
This n8n template creates a next-generation Telegram AI assistant 🧠💬 capable of handling text messages, media files, and documents with advanced processing, PostgreSQL integration, and AI-powered responses. It is designed to solve Telegram's media group challenge 📦: when multiple files are sent together, they are stored, processed, and combined into one coherent AI-generated reply.

✨ Key Features
  • 📂 Multi-file media group management backed by three PostgreSQL tables: media_group, media_queue, and chat_histories
  • 📑 Document parsing for CSV, HTML, ICS, JSON, ODS, PDF (with AI fallback), RTF, TXT, XML, and spreadsheets
  • 🎤 Voice and video transcription for AI analysis
  • 🖼️ Image, audio, and video description for richer AI context
  • 🛡️ Telegram-safe MarkdownV2 formatting with auto-splitting for messages over 4096 characters
  • ⚠️ Error fallback for unsupported file types

💡 Acknowledgment
A huge thank you to Ezema Gingsley Chibuzo 🙌 for the inspiration of the first version of this workflow, "Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG". Your pioneering work laid the foundation for this improved, database-powered multi-modal assistant 🚀

🏷 Tags: telegram, ai-assistant, postgresql, multi-file, media-group, file-processing, voice-transcription, document-parser, pdf-extraction, markdown-formatting, n8n-template

💼 Use Case
Use this template if you need an AI-powered Telegram bot that can:
  • 📦 Handle multiple files sent in a single message (albums, multiple PDFs, etc.)
  • 🧾 Extract and analyze content from many file formats
  • 🎙️ Transcribe voice and video messages
  • 🗂️ Maintain chat memory for contextual AI answers
  • 🛡️ Avoid Telegram formatting errors and length-limit issues

This workflow automates the full chain: Receive → Process → AI Analysis → Telegram-safe Reply.

💬 Example User Interactions
  • 📄 Multiple PDFs with a caption → AI extracts and summarizes all PDFs in one combined reply
  • 🎤 Voice message → AI transcribes and replies with a contextual answer
  • 📊 CSV or spreadsheet file → AI parses and summarizes the data
  • 🖼️ Multiple images → AI describes each image and replies in a single message

🔑 Required Credentials
  • Telegram Bot API (bot token)
  • PostgreSQL (connection credentials)
  • AI provider API (OpenAI, Google Gemini, or a compatible LLM)

⚙️ Setup Instructions
  1. 🗄️ Create the PostgreSQL tables (gray-section SQL): media_group, media_queue, chat_histories
  2. 🔌 Configure the Telegram Trigger with your bot token
  3. 🤖 Connect your AI provider credentials
  4. 🗂️ Set up PostgreSQL credentials in the database nodes
  5. ▶️ Deploy the workflow in n8n
  6. 🎯 Start sending messages and files to your bot

📌 Extra Notes
  • ✅ The green section ensures only one trigger per media group
  • 📌 The yellow section guarantees captions and files are stored in the correct sequence
  • ✨ The purple section formats AI output to be Telegram-safe and splits it if needed
  • 🧠 The AI prompt is not fixed, allowing full customization

💡 Need assistance? Feel free to reach out: 📧 Email: [email protected] 🔗 LinkedIn: John Alejandro Silva Rodríguez
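The media-group challenge mentioned above stems from Telegram delivering each file of an album as a separate update sharing a `media_group_id`. A minimal sketch of the bucketing idea (not the template's actual PostgreSQL-backed implementation) looks like this:

```python
# Illustrative sketch: group Telegram updates that share a
# media_group_id so an album is processed as one unit. The template
# itself persists this state in PostgreSQL rather than in memory.

from collections import defaultdict

def group_updates(updates):
    """Bucket updates by media_group_id; singletons keep their message_id."""
    groups = defaultdict(list)
    for u in updates:
        key = u.get("media_group_id") or u["message_id"]
        groups[key].append(u)
    return dict(groups)

updates = [
    {"message_id": 1, "media_group_id": "g1", "file": "a.pdf"},
    {"message_id": 2, "media_group_id": "g1", "file": "b.pdf"},
    {"message_id": 3, "media_group_id": None, "file": "voice.ogg"},
]
grouped = group_updates(updates)
print(len(grouped["g1"]))  # 2 files answered with one combined reply
```

Persisting the buckets in a database (as this template does) is what allows only one trigger per media group even when updates arrive across separate workflow executions.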

Scrape Google Maps by area & Generate Outreach Messages for Lead Generation

This n8n workflow automates lead extraction from Google Maps, enriches data with AI, and stores results for cold outreach. It uses the Bright Data community node and Bright Data MCP for scraping and AI message generation.

How it works
  1. Form submission: the user provides a Google Maps starting location, keyword, and country.
  2. Bright Data scraping: the Bright Data community node triggers a Maps scraping job, monitors progress, and downloads results.
  3. AI message generation: Bright Data MCP with LLMs creates a personalized cold-call script and talking points for each lead.
  4. Database storage: enriched leads and scripts are upserted to Supabase.

How to use
Set up all the credentials, create your Postgres table, and submit the form. The rest happens automatically.

Requirements
  • An LLM account (OpenAI, Gemini, …) for API usage.
  • A Bright Data account for API and MCP usage.
  • A Supabase account (or another Postgres database) to store information.

Create a human-like Evolution API WhatsApp agent with Redis, PostgreSQL and Gemini

🤖 Human-like Evolution API Agent with Redis & PostgreSQL

This production-ready template builds a sophisticated AI agent using Evolution API that mimics human interaction patterns. Unlike standard chatbots that reply instantly to every incoming message, this workflow uses a smart Redis buffering system: it waits for the user to finish typing their full thought (text, audio, or image albums) before processing, creating a natural, conversational flow.

It features a hybrid memory architecture: active conversations are cached in Redis for ultra-low latency, while the complete chat history is securely stored in PostgreSQL. To optimize token usage and maintain long-term coherence, a Context Refiner agent summarizes the conversation history before the main AI generates a response.

✨ Key Features
  • Human-like buffering: the agent waits (a configurable time) to group consecutive messages, voice notes, and media albums into a single context. This prevents fragmented replies and feels like talking to a real person.
  • Hybrid memory: combines Redis (hot cache) for speed and PostgreSQL (cold storage) for permanent history.
  • Context refinement: a specialized AI step summarizes past interactions, allowing the main agent to understand long conversations without exceeding token limits or increasing costs.
  • Multi-modal support: natively handles text, audio transcription, and image analysis via Evolution API.
  • Parallel processing: manages "typing..." status and session checks in parallel to reduce response latency.

📋 Requirements
  • A running Evolution API instance (see the configuration guide).
  • The n8n-nodes-evolution-api community node installed in your n8n instance.
  • A PostgreSQL database for chat history and a Redis instance for the buffer/cache.
  • API keys for your LLM (OpenAI, Anthropic, or Google Gemini).

⚙️ Setup Instructions
  1. Install the node: go to Settings > Community Nodes in n8n and install n8n-nodes-evolution-api.
  2. Configure credentials for Redis, PostgreSQL, and your AI provider (e.g., OpenAI/Gemini).
  3. Create a chat_history table in PostgreSQL (columns must match the Insert node).
  4. Configure your Redis credentials in the workflow nodes.
  5. Set the following in the "Global Variables" node: wait_buffer (seconds to wait for the user to stop typing, e.g., 5), wait_conversation (seconds to keep the cache alive, e.g., 300), and max_chat_history (number of past messages to retrieve).
  6. Point your Evolution API instance to this workflow's webhook URL.

🚀 How It Works
  1. Ingestion: receives data via Evolution API and detects whether it is text, audio, or an album.
  2. Smart buffering: holds the execution to collect all parts of the user's message (simulating a human reading/listening).
  3. Context retrieval: checks Redis for the active session; if empty, fetches from PostgreSQL.
  4. Refinement: the Refiner agent summarizes the history to extract key details.
  5. Response: the main agent generates a reply based on the refined context and current buffer, then saves it to both Redis and Postgres.

💡 Need assistance? Feel free to reach out: 📧 Email: [email protected] 🔗 LinkedIn: John Alejandro Silva Rodríguez
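The buffering idea described above (wait until the sender has been quiet for `wait_buffer` seconds, then flush the accumulated messages as one context) can be sketched without Redis. This is a hedged, in-memory stand-in, not the template's actual Redis-backed code:

```python
# Illustrative sketch of "human-like buffering": flush a chat's queued
# messages only once its last message is older than wait_buffer seconds.
# A plain dict stands in for Redis here.

def flush_ready(buffer, now, wait_buffer=5):
    """Return (and remove) chats whose last message is old enough to flush."""
    ready = {}
    for chat_id, entry in list(buffer.items()):
        if now - entry["last_ts"] >= wait_buffer:
            ready[chat_id] = entry["messages"]
            del buffer[chat_id]
    return ready

buffer = {
    "chat1": {"last_ts": 100, "messages": ["hi", "are you there?"]},
    "chat2": {"last_ts": 104, "messages": ["hello"]},
}
ready = flush_ready(buffer, now=106)
print(ready)  # chat1 flushed as one context; chat2 still accumulating
```

In the real workflow, Redis key expiry (`wait_conversation`) plays the role of the manual deletion above.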

Cheaper, Faster, Accurate Answers with Memory Summarization & Dynamic Routing!

🤖💬 Smart Telegram AI Assistant with Memory Summarization & Dynamic Model Selection

Optimize your AI workflows, cut costs, and get faster, more accurate answers.

📋 Description
Tired of expensive AI calls, slow responses, or bots that forget your context? This Telegram AI assistant template is designed to optimize cost, speed, and precision in AI-powered conversations. By combining PostgreSQL chat memory, AI summarization, and dynamic model selection, this workflow ensures you only pay for what you really need: simple queries get routed to lightweight models, while complex requests automatically trigger more advanced ones. The result? Smarter context, lower costs, and better answers.

This template is perfect for anyone who wants to:
  • ⚡ Save money by using cheaper models for easy tasks.
  • 🧠 Keep context relevant with AI-powered summarization.
  • ⏱️ Respond faster thanks to optimized chat memory storage.
  • 💬 Deliver better answers directly inside Telegram.

✨ Key Benefits
  • 💸 Cost optimization: automatically routes simple requests to Gemini Flash Lite and reserves Gemini Pro only for complex reasoning.
  • 🧠 Smarter context: summarization ensures only the most relevant chat history is used.
  • ⏱️ Faster workflows: storing user and agent messages in a single row reduces DB queries by half and saves roughly 0.3 s per response.
  • 🎤 Voice message support: converts Telegram voice notes to text and replies intelligently.
  • 🛡️ Error-proof formatting: safe MarkdownV2 ensures Telegram-ready answers.

💼 Use Case
This template is for anyone who needs an AI chatbot on Telegram that balances cost, performance, and intelligence. Customer support teams can reduce expenses by using lightweight models for FAQs; freelancers and consultants can offer faster AI-powered chats without losing context; power users can handle voice and text seamlessly while keeping conversations memory-aware. Whether you're scaling a business or just want a smarter assistant, this workflow adapts to your needs and budget.

💬 Example Interactions
  • Quick Q&A → routed to Gemini Flash Lite for fast, low-cost answers.
  • Complex problem-solving → sent to Gemini Pro for in-depth reasoning.
  • Voice messages → automatically transcribed, summarized, and answered.
  • Long conversations → context is summarized, ensuring precise and efficient replies.

🔑 Required Credentials
  • Telegram Bot API (bot token)
  • PostgreSQL (database connection)
  • Google Gemini API (Flash Lite, Flash, Pro)

⚙️ Setup Instructions
  1. 🗄️ Create the PostgreSQL table (chat_memory) from the gray-section SQL.
  2. 🔌 Configure the Telegram Trigger with your bot token.
  3. 🤖 Connect your Gemini API credentials.
  4. 🗂️ Set up the PostgreSQL nodes with your DB details.
  5. ▶️ Activate the workflow and start chatting with your AI-powered Telegram bot.

🏷 Tags: telegram, ai-assistant, chatbot, postgresql, summarization, memory, gemini, dynamic-routing, workflow-optimization, cost-saving, voice-to-text

🙏 Acknowledgement
A special thank you to Davide for the inspiration behind this template. His work on the AI Orchestrator that dynamically selects models based on input type served as a foundational guide for this architecture.

💡 Need assistance? Want to customize this workflow for your business or project? Let's connect: 📧 Email: [email protected] 🔗 LinkedIn: John Alejandro Silva Rodríguez
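A dynamic model router of the kind described above boils down to a classification step before the LLM call. The heuristic below is purely illustrative (the template's actual routing logic and thresholds may differ); only the model names mirror the description:

```python
# Hedged sketch of dynamic model selection: cheap model for simple
# queries, stronger model for long or reasoning-heavy ones. The
# complexity markers and word-count threshold are made up.

def pick_model(question):
    """Route a question to a cost-appropriate Gemini model."""
    complex_markers = ("why", "explain", "compare", "analyze", "plan")
    words = question.split()
    if len(words) > 30 or any(m in question.lower() for m in complex_markers):
        return "gemini-pro"
    return "gemini-flash-lite"

print(pick_model("What time is it?"))                      # gemini-flash-lite
print(pick_model("Explain the tradeoffs of caching here")) # gemini-pro
```

In practice many templates use a small LLM as the classifier instead of a keyword heuristic, trading a tiny extra call for better routing accuracy.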

+18

💅 AI Agents Generate Content & Automate Posting for Beauty Salon Social Media 📲

Who Is This For?
This workflow is for beauty salons that want consistent, high-quality social media content without writing every post manually. It also suits agencies and automation builders who manage multiple beauty brands and want a reusable, AI-driven posting system they can adapt per client.

What Problem Is This Workflow Solving?
Many beauty businesses struggle to post regularly because research, copywriting, and design all take time and marketing skills. This workflow automates research, writing, image creation, and posting, so your channels stay active and relevant while you focus on clients and services.

What This Workflow Does
  • Generates short, engaging posts tailored to a beauty-salon audience (hair, nails, skincare, make-up, self-care) using an AI agent.
  • Uses Tavily internet search to pull up-to-date information and trends based on a reference link or topic.
  • Turns each post into a detailed, photorealistic image prompt and creates a matching visual with an AI image model (for example, gpt-image-1 or other connected providers).
  • Automatically sends the final text and image to Telegram, and can be extended to other social platforms from the Split Out node.

How It Works
Trigger the workflow using any of the following:
  • Scheduled automatic generation: run the parent workflow on a schedule (for example, once per day at 9 AM) to publish new content regularly.
  • Google Sheets trigger: generate content when a new row with a reference link or topic is added to your sheet. Use it when you manage ideas or briefs in Google Sheets and want the workflow to react as soon as a new idea appears.
  • RSS feed trigger: start the workflow when new items appear in a selected RSS feed. Ideal for turning fresh blog posts, news, or industry updates into social media content or automated summaries.
  • Meta (Facebook/Instagram) webhook: use the Meta Reference trigger to fire the workflow on incoming webhooks from Meta (for example, new comments, messages, or events). Helpful when you want to auto-respond, log activity, or generate follow-up content from Meta activity.
  • Airtable trigger: start the workflow when records in a selected Airtable base/table change (for example, a new idea, brief, or status update), so your posts react instantly to updates in your Airtable content board.
  • Postgres trigger: fire the workflow when new rows are inserted or existing rows are updated in a connected PostgreSQL table, letting you drive content generation from events in your app database or a Supabase-style back end.
  • Manual start: hit Execute workflow whenever you want to spin up a batch of posts on demand, test new prompt settings, or debug the flow step by step.

Research and generate copy: the GENERATE TEXT agent calls Tavily to gather fresh information on the topic, then writes a post under 1024 characters with a hook, practical tips, relevant hashtags, and a closing line with your salon address and contact.

Create the visual: the GENERATE PROMPT agent converts the post into a single, clear description of the scene (client, service, salon interior, lighting, mood) with a strict "no text on image" rule. An image model such as gpt-image-1 or one of the HTTP image APIs renders a matching beauty visual.

Distribute the content: the Split Out node fans out the result so Telegram receives a photo post with the generated caption. Additional social or content nodes (for example Facebook, LinkedIn, X, template tools) can be wired after this step for multi-channel posting.

How to Customize This Workflow to Your Needs
  • Brand voice: edit the system message in the GENERATE TEXT node to adjust tone (luxury, friendly, clinical, playful), language, services, and city. Update the final address and phone line to match your salon details.
  • Topics and triggers: point the Google Sheets Trigger to your own document ID, sheet, and columns for ideas, links, or campaign themes. Use the Schedule Trigger for fully automatic posting or rely on the Manual Trigger for controlled, batch generation sessions.
  • Models and providers: swap the GPT-5 LLM and the default image model for alternatives such as Mistral, Gemini, Anthropic, DeepSeek, or custom HTTP image APIs by enabling the corresponding nodes and adding credentials.
  • Channels and outputs: connect or remove social nodes after Split Out depending on which platforms you actively use. Add extra processing steps (for example, resizing images or adding UTM parameters) before each channel if needed.
  • Visual style: tweak the GENERATE PROMPT instructions to control composition (close-up vs. full-body), color palette, lighting, and overall salon aesthetic, while keeping the constraint of no text or logos in the image.

Build your own Google Gemini Chat Model and Postgres integration

Create custom Google Gemini Chat Model and Postgres workflows by choosing triggers and actions. Nodes come with global operations and settings, as well as app-specific parameters that can be configured. You can also use the HTTP Request node to query data from any app or service with a REST API.

Postgres supported actions

  • Delete: delete an entire table or rows in a table
  • Execute Query: execute an SQL query
  • Insert: insert rows in a table
  • Insert or Update: insert or update rows in a table
  • Select: select rows from a table
  • Update: update rows in a table
Use case

Save engineering resources

Reduce time spent on customer integrations, engineer faster POCs, and keep customer-specific functionality separate from your product, all without having to code.

Learn more

FAQs

  • Can Google Gemini Chat Model connect with Postgres?

  • Can I use Google Gemini Chat Model’s API with n8n?

  • Can I use Postgres’s API with n8n?

  • Is n8n secure for integrating Google Gemini Chat Model and Postgres?

  • How do I get started with the Google Gemini Chat Model and Postgres integration in n8n?

Need help setting up your Google Gemini Chat Model and Postgres integration?

Discover the latest community recommendations and join the discussions about the Google Gemini Chat Model and Postgres integration.
Mikhail Savenkov
Honza Pav
Vyacheslav Karbovnichy
Dennis

Looking to integrate Google Gemini Chat Model and Postgres in your company?

Over 3000 companies switch to n8n every single week

Why use n8n to integrate Google Gemini Chat Model with Postgres

Build complex workflows, really fast


Handle branching, merging and iteration easily.
Pause your workflow to wait for external events.

Code when you need it, UI when you don't

Simple debugging

Your data is displayed alongside your settings, making edge cases easy to track down.

Use templates to get started fast

Use 1000+ workflow templates available from our core team and our community.

Reuse your work

Copy and paste, easily import and export workflows.

Implement complex processes faster with n8n
