Jacob Rafal

n8n Workflow Orchestration Platform

Designed and deployed 20+ enterprise automation workflows using n8n, integrating LLMs, APIs, and databases into no-code AI solutions. A live implementation powers this portfolio's AI assistant.

n8n · LangChain · PostgreSQL · APIs · No-Code · Workflow Automation · Groq · RAG
🔴 Live Implementation Running Now!

The workflows shown below are actively powering the AI assistant on this portfolio. Scroll down to try it yourself!

Workflow Architecture

🤖 AI Portfolio Assistant Workflow

This workflow processes every message sent to the AI assistant on this portfolio. It handles authentication, RAG retrieval, LLM processing with Groq, and maintains conversation history.

[Screenshot: AI Portfolio Assistant n8n workflow, live on Railway]

• Input Processing: Webhook validates authentication and extracts message data
• AI Processing: Groq LLM (Llama 3.3 70B) with RAG-enhanced context
• Response Time: Average 1-2 seconds end-to-end
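
The input-processing step above could be sketched as an n8n Code node like the following. The header name, payload fields, and error shape are assumptions for illustration, not the production configuration:

```javascript
// Minimal sketch of the webhook's input validation. The "x-api-key"
// header, payload fields, and error shape are illustrative assumptions.
function validateRequest(headers, body, expectedKey) {
  // Reject requests that lack the shared API key header
  if (!headers || headers["x-api-key"] !== expectedKey) {
    return { ok: false, status: 401, error: "Unauthorized" };
  }
  // Extract the fields the downstream AI Agent node needs
  const message = ((body && body.message) || "").trim();
  const sessionId = body && body.sessionId;
  if (!message || !sessionId) {
    return { ok: false, status: 400, error: "message and sessionId are required" };
  }
  return { ok: true, message, sessionId };
}
```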

πŸ”RAG Embeddings Pipeline

Handles document ingestion and semantic search. Creates vector embeddings for portfolio content, enabling intelligent context retrieval for accurate AI responses.

[Screenshot: RAG Embeddings Pipeline n8n workflow, document ingestion and vector embeddings]

⚡ Infrastructure

• n8n self-hosted on Railway (99.9% uptime)
• PostgreSQL for conversation memory
• Supabase Vector Store for RAG
• API key authentication
• Sub-2-second response times

🤖 AI Components

• Groq LLM (Llama 3.3 70B Versatile)
• Google Gemini embeddings
• RAG-enhanced responses
• Context-aware conversations
• 95%+ accuracy on portfolio queries

About This Project

Live production implementation of n8n workflow automation powering the AI assistant on this portfolio. This project demonstrates expertise in no-code AI orchestration, integrating LLMs, vector databases, and conversation memory to create an intelligent, context-aware assistant with RAG-enhanced responses and sub-2-second response times.

📋 Overview

Built two complementary n8n workflows that work together to power this portfolio's AI assistant: one for real-time chat processing with RAG retrieval, and another for document ingestion and vector embeddings. Self-hosted on Railway with PostgreSQL for conversation memory and Supabase for vector storage, achieving 99.9% uptime and consistent performance.

Workflow Architecture

🤖 AI Portfolio Assistant

Main chat processing workflow:

• Webhook trigger with API key authentication
• AI Agent node with Groq LLM (Llama 3.3 70B)
• RAG context retrieval from Supabase
• PostgreSQL conversation memory
• Response formatting and delivery

πŸ”RAG Embeddings Pipeline

Document processing workflow:

• Portfolio content ingestion and parsing
• Text chunking with optimal overlap
• Google Gemini embeddings generation
• Vector storage in Supabase
• Semantic search capability
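
The chunking step in this pipeline can be approximated with a sliding window like the one below. The chunk size and overlap values are illustrative assumptions, not the production settings:

```javascript
// Illustrative text chunker with overlap, approximating the pipeline's
// chunking step before embedding. Sizes are assumptions.
function chunkText(text, chunkSize = 500, overlap = 100) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    // Advance by chunkSize minus overlap so adjacent chunks share context,
    // which keeps semantic search from splitting ideas at hard boundaries
    start += chunkSize - overlap;
  }
  return chunks;
}
```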

Performance Metrics

⚡ Response Performance

End-to-end response: 1-2 s
Query accuracy: 95%+
RAG retrieval: <500 ms
Memory persistence: Real-time

πŸ›‘οΈSystem Reliability

Uptime (Railway): 99.9%
Error handling: Auto-retry
Security: API key auth
Chat history: Persistent

Key Features

🎯 Intelligent Context Retrieval

RAG system retrieves relevant portfolio content using semantic search for accurate, contextual responses.
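
The retrieval step amounts to ranking stored chunks by similarity to the query embedding. A minimal sketch, using toy vectors in place of the real Gemini embeddings stored in Supabase:

```javascript
// Cosine similarity between two embedding vectors of equal length
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k most similar documents to the query embedding,
// approximating the vector store's semantic search
function topK(queryVec, docs, k = 3) {
  return docs
    .map(d => ({ ...d, score: cosine(queryVec, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```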

💬 Conversation Memory

PostgreSQL-backed memory maintains context across messages for natural multi-turn conversations.
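
The memory mechanism can be sketched as storing each turn per session and replaying a recent window as context. The in-memory Map below stands in for the PostgreSQL table; names and the window size are assumptions:

```javascript
// Rough sketch of session-scoped conversation memory. A Map stands in
// for the PostgreSQL table; table/column names are assumptions.
const history = new Map();

function saveTurn(sessionId, role, content) {
  if (!history.has(sessionId)) history.set(sessionId, []);
  history.get(sessionId).push({ role, content, at: Date.now() });
}

function loadWindow(sessionId, maxTurns = 10) {
  // Only the most recent turns are sent to the model, keeping the
  // prompt small while preserving multi-turn context
  return (history.get(sessionId) || []).slice(-maxTurns);
}
```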

🚀 Fast LLM Inference

Groq's lightning-fast Llama 3.3 70B model delivers quality responses in under 2 seconds.

🔧 Technical Implementation

No-Code AI Orchestration

Leveraged n8n's visual workflow builder to create complex AI pipelines without traditional coding.

RAG Architecture

Implemented retrieval-augmented generation using Supabase vector store and Google Gemini embeddings.

LangChain Integration

Used LangChain agent nodes for orchestrating LLM calls, memory management, and tool usage.

Production Deployment

Self-hosted n8n on Railway with PostgreSQL and Supabase, achieving enterprise-grade reliability.

💡 Real-World Impact

This live implementation showcases practical expertise in n8n workflow automation, AI orchestration, and no-code developmentβ€”directly applicable to building production-grade GenAI solutions for enterprise applications.

Workflow Configuration Example

{
  "nodes": [
    {
      "name": "Webhook Trigger",
      "type": "n8n-nodes-base.webhook",
      "parameters": {
        "path": "ai-assistant",
        "authentication": "headerAuth"
      }
    },
    {
      "name": "AI Agent",
      "type": "@n8n/n8n-nodes-langchain.agent",
      "parameters": {
        "model": "groq:llama-3.3-70b",
        "systemPrompt": "Portfolio assistant context...",
        "tools": ["vectorStore", "memory"]
      }
    },
    {
      "name": "Vector Store",
      "type": "@n8n/n8n-nodes-langchain.vectorStoreSupabase",
      "parameters": {
        "tableName": "documents",
        "embeddings": "gemini"
      }
    }
  ]
}

Try the Live Implementation

Experience the n8n workflows in action. This chat assistant uses the exact workflows shown above.
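
A client call to the webhook might look like the sketch below. The URL, header name, and payload shape are assumptions based on the workflow configuration above, not the real endpoint:

```javascript
// Hypothetical client request to the assistant's webhook. The URL,
// "x-api-key" header, and payload fields are illustrative assumptions.
const WEBHOOK_URL = "https://example-n8n.up.railway.app/webhook/ai-assistant";

function buildAssistantRequest(message, sessionId, apiKey) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": apiKey, // header auth, matching the webhook node
    },
    body: JSON.stringify({ message, sessionId }),
  };
}

// Usage in any fetch-capable runtime:
// const res = await fetch(WEBHOOK_URL,
//   buildAssistantRequest("What powers this site?", "demo-1", process.env.N8N_API_KEY));
// const reply = await res.json();
```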

AI Portfolio Assistant



© 2025 Jacob Rafal