Portfolio
- Atharva Devasthali
- Coder
- Web Developer
- ML/AI Developer
About me
Hi There! I'm Atharva Devasthali
I am a Software Engineer with 3+ years of experience in full-stack web development, specializing in TypeScript, React, and Angular on the frontend. On the backend, I work with Python frameworks — Flask, Django, and FastAPI — alongside Go (Gin) to build robust, scalable server-side systems.
I have a strong focus on AI-powered automation, RAG systems, and workflow orchestration using tools like n8n, integrating LLMs to build intelligent, scalable systems. My experience spans Machine Learning, DevOps, and delivering production-grade full-stack solutions. With a strong foundation in frontend development, I bring a user-centric perspective to every layer of the stack — from pixel-perfect interfaces to intelligent backend systems.
Specialties
- Web Development
- Machine Learning & AI
- UI / UX
- Software Development
Projects
These are projects I have worked on, or am currently working on, either independently or as part of a team.
StackPilot - Self-Hosted DevOps Control Plane
RAG-Powered Email Personalization System
Automated Video Generator with Workflow Orchestration
DashTro - Headless CMS
StackPilot - Self-Hosted DevOps Control Plane

System Overview
A full-stack platform for managing application deployments, secrets, and build configuration—built as a self-hosted alternative to Render or Railway. StackPilot provides encrypted secrets management, role-based access control, and Git webhook integration enabling teams to deploy and manage services without expensive managed platforms.
Architecture
Three-Level Deployment Hierarchy
Organized structure modeled after modern deployment platforms: Project → Environment → Service
- Projects: Top-level organizational units (e.g., "E-commerce Platform")
- Environments: Deployment targets (development, staging, production)
- Services: Individual components (web servers, databases, static sites) with independent secrets and build configurations
Secrets Management
AES-256-GCM Encrypted Storage
Production-grade encryption protecting sensitive credentials:
- Secrets encrypted before database storage using AES-256-GCM
- Encryption key auto-generated at first startup, stored separately from database
- List endpoints return metadata only (names, timestamps)—no values exposed
- Individual fetch decrypts on demand—values returned only on explicit request
- Git integration for sourcing secrets from encrypted repository files
Build & Deploy Configuration
Per-Service Build Management
Each service maintains independent configuration:
- GitHub repository URL with encrypted personal access token
- Branch specification and root directory (monorepo support)
- Build and start commands
- Auto-generated UUID webhook URL for GitHub integration
GitHub Webhook Integration
- Unique webhook secret per service (UUID auto-generated)
- Receive push events triggering automated deployments
- Secure trigger mechanism preventing unauthorized deploys
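GitHub signs each webhook delivery with the service's configured secret using HMAC-SHA256, sent in the X-Hub-Signature-256 header. A minimal verification sketch (Python here for illustration; StackPilot's backend is Go, and the payload below is a placeholder):

```python
import hashlib
import hmac

def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header ('sha256=<hex>') against the secret."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks on the signature
    return hmac.compare_digest(expected, signature_header)

# Example: a push event body signed with the service's auto-generated UUID secret
secret = "3f9a1c2e-7b4d-4e8a-9c51-0d6f2a8b1e47"
payload = b'{"ref": "refs/heads/main"}'
sig = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
assert verify_github_signature(secret, payload, sig)
assert not verify_github_signature(secret, b'{"ref": "refs/heads/evil"}', sig)
```

Rejecting any request whose signature does not match is what makes the trigger mechanism safe against unauthorized deploys.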
Authentication & Authorization
Multi-Layer Security
JWT Authentication:
- 24-hour token expiry with HTTP-only secure cookies
- HS256 signing preventing token tampering
Role-Based Access Control:
- Owner: Full platform control
- Admin: Project-level management
- User: Read access to assigned projects
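HS256 is simply an HMAC-SHA256 signature over the base64url-encoded header and payload, which is what makes token tampering detectable. A minimal stdlib sketch of the scheme (illustrative Python; StackPilot's actual implementation is in Go):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> bool:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    # Constant-time comparison avoids leaking signature bytes via timing
    return hmac.compare_digest(expected, sig)

token = sign_jwt({"sub": "atharva", "role": "owner"}, b"dev-secret")
assert verify_jwt(token, b"dev-secret")
assert not verify_jwt(token, b"wrong-key")
```

Any change to the payload or signing with a different key fails verification, which is the property the 24-hour tokens rely on.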
Project-Scoped API Keys:
- 32-byte hexadecimal keys for CI/CD integrations
- 7-day expiration with project-level scoping
- Revocable tokens for instant access termination
Password Reset:
- Secure token delivery via the Resend API
- bcrypt-hashed tokens with 1-hour expiry
REST API
25+ Endpoints Across Permission Tiers
- Public: Authentication, registration, webhooks
- User-level: View projects, environments, services, secrets (metadata)
- Admin: Create/modify/delete resources with cascading cleanup
Cascading Deletion: Delete Project → removes environments → services → secrets → configs. API-layer enforcement ensuring data consistency.
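The child-first deletion ordering can be sketched against an in-memory SQLite database (illustrative Python; the real implementation uses Go, GORM, and SQLite, and the table names here are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (id INTEGER PRIMARY KEY);
CREATE TABLE environments (id INTEGER PRIMARY KEY, project_id INTEGER);
CREATE TABLE services (id INTEGER PRIMARY KEY, env_id INTEGER);
CREATE TABLE secrets (id INTEGER PRIMARY KEY, service_id INTEGER);
INSERT INTO projects VALUES (1);
INSERT INTO environments VALUES (10, 1);
INSERT INTO services VALUES (100, 10);
INSERT INTO secrets VALUES (1000, 100);
""")

def delete_project(conn, pid):
    """Delete children before parents, inside one transaction."""
    with conn:  # commits on success, rolls back on any exception
        conn.execute(
            "DELETE FROM secrets WHERE service_id IN "
            "(SELECT id FROM services WHERE env_id IN "
            "(SELECT id FROM environments WHERE project_id=?))", (pid,))
        conn.execute(
            "DELETE FROM services WHERE env_id IN "
            "(SELECT id FROM environments WHERE project_id=?)", (pid,))
        conn.execute("DELETE FROM environments WHERE project_id=?", (pid,))
        conn.execute("DELETE FROM projects WHERE id=?", (pid,))

delete_project(conn, 1)
assert conn.execute("SELECT COUNT(*) FROM secrets").fetchone()[0] == 0
```

The transaction wrapper is the important part: a failure at any step rolls back the whole delete, so no orphaned secrets or configs survive a partial deletion.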
Frontend Dashboard
React Management Interface
Full CRUD operations for the entire hierarchy:
- Create/view/edit/delete projects with environment overview
- Add environments with service listings
- Deploy web servers, databases, static sites
- Manage secrets with show/hide toggle for sensitive values
- Edit build configuration with real-time validation
State Management:
- Zustand stores scoped by feature domain
- Context-keyed stores (projectId/envName) supporting multiple concurrent contexts
- Optimistic updates with devtools integration
Technical Stack
- Backend: Go, Gin, GORM, SQLite, AES-256-GCM, JWT (HS256), bcrypt
- Frontend: React 18, TypeScript, Vite, Zustand, Tailwind CSS, shadcn/ui, Radix UI, react-router-dom v7
- Component System: Custom UI library (Button, Input, Card, Modal, Toast) with Storybook documentation and Vitest tests
Technical Highlights
- Security-First API Design: List endpoints return metadata only—secret values never exposed in bulk operations. Individual GET requests decrypt specific secrets on demand. GORM Find + RowsAffected pattern suppresses noisy "record not found" error logs.
- Cascade Delete Logic: API-layer enforcement with explicit ordering (services → secrets → configs). Transaction-wrapped preventing partial deletions. Audit logging of each deletion step.
- UUID Webhook Secrets: Auto-generated on config save (never user-supplied). 128-bit entropy preventing prediction attacks. Encrypted storage with separate encryption key.
Self-Hosting Benefits
Cost Efficiency:
- Render/Railway: $20–50/month per user
- StackPilot self-hosted: $6–12/month total
- Annual savings: $200–600 for small teams
Data Sovereignty:
- Complete control over deployment credentials
- No third-party access to source code
- Compliance-friendly for regulated industries
Key Challenges & Solutions
- Preventing Secret Exposure in Logs: Metadata-only list endpoints, explicit decrypt-on-demand pattern, sanitized error messages never including secret values.
- Cascade Deletion Complexity: API-layer enforcement with explicit ordering, transaction wrapping enabling rollback on failures, clear error messages about deletion impacts.
- Webhook Security: UUID-based auto-generated webhook URLs (128-bit entropy), encrypted storage, never user-supplied preventing weak values.
- Frontend State Across Nested Hierarchy: Zustand stores keyed by context enabling simultaneous multi-project management, optimistic updates with rollback on API failure.
Key Takeaways
StackPilot demonstrates building production-grade DevOps tooling with modern security practices (AES-256-GCM, JWT, RBAC) while maintaining developer-friendly workflows. The platform proves self-hosted alternatives to expensive managed services are viable—strategic architecture delivers enterprise features at a fraction of SaaS costs.
Security: AES-256-GCM encryption-at-rest, JWT (HS256), RBAC, bcrypt
API: 25+ endpoints, 3-tier hierarchy, cascading deletes, metadata-only list endpoints
Economics: $6–12/month self-hosted vs. $20–50/month/user managed platforms
Status: Active development, core features complete
Automated Video Generator with Workflow Orchestration

System Overview
A fully automated content creation and streaming platform that transforms job listings from major employment portals into professionally narrated video streams. Operating 24/7 with zero manual intervention, the system processes 120 job listings daily, generates 12 half-hour videos (6 hours of fresh content), and maintains continuous YouTube streaming—all on a $6/month infrastructure budget achieving 98% cost savings vs. commercial alternatives.
Content Pipeline Architecture
Multi-Stage Automation Flow
Stage 1: Content Aggregation
Automated extraction from six major platforms: LinkedIn, Indeed, Naukri, Internshala, Foundit, and Monster via Google Custom Search API (100 free requests/day). Strategic batching and platform rotation consistently delivers 120 jobs daily within free-tier limits.
Daily Volume: 120 jobs → 12 videos (10 jobs each) → 6 hours new content + 6 hours replay = 12-hour streaming cycle
Stage 2: AI Script Generation
GPT-4o Mini transforms raw job data into engaging 30-minute narratives with engineered prompts ensuring consistent tone, structure, and information density. Automated validation checks length, coherence, and quality before proceeding.
Stage 3: Audio Synthesis
Kokoro TTS (Hugging Face open-source) converts scripts to natural narration—self-hosted eliminating the $300-500/month cost of commercial TTS services while providing unlimited generation capacity.
Stage 4: Visual Pipeline
Three-step conversion ensuring compatibility and quality:
- PowerPoint automation creates slides from job data with dynamic layouts
- LibreOffice headless renders PPTX to high-fidelity PDF
- WebP export produces optimized video frames (30-40% smaller than PNG)
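The conversion steps above can be sketched as command builders. The LibreOffice flags are the standard headless-conversion invocation; the cwebp step is an assumption about how the WebP export is performed, and all file paths are placeholders:

```python
def libreoffice_pdf_cmd(pptx: str, outdir: str) -> list:
    # LibreOffice headless conversion: PPTX -> high-fidelity PDF
    return ["soffice", "--headless", "--convert-to", "pdf", "--outdir", outdir, pptx]

def cwebp_cmd(png: str, webp: str, quality: int = 80) -> list:
    # Re-encode a rendered page as WebP (30-40% smaller than PNG)
    return ["cwebp", "-q", str(quality), png, "-o", webp]

# In the pipeline these would be executed with subprocess.run(cmd, check=True)
cmd = libreoffice_pdf_cmd("slides/jobs_batch_01.pptx", "build/pdf")
assert cmd[0] == "soffice" and "--headless" in cmd
```

Building commands as argument lists (rather than shell strings) avoids quoting bugs when job titles or filenames contain spaces.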
Stage 5: Video Composition
FFmpeg merges audio with visuals creating 30-minute segments (1080p, H.264, optimized for streaming). Parallel processing generates 3-4 videos concurrently reducing total cycle time.
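A representative FFmpeg invocation for one segment, sketched as a command builder (the flags are real FFmpeg options, but the exact encoding settings and filenames used in production may differ):

```python
def compose_segment_cmd(frame: str, audio: str, out: str) -> list:
    """Merge a slide frame with narration into a stream-ready H.264 segment."""
    return [
        "ffmpeg", "-y",
        "-loop", "1", "-i", frame,      # hold the slide image for the duration
        "-i", audio,                    # narration track from TTS
        "-c:v", "libx264", "-tune", "stillimage",
        "-c:a", "aac", "-b:a", "192k",
        "-pix_fmt", "yuv420p",          # broad player/streaming compatibility
        "-shortest",                    # stop when the audio ends
        out,
    ]

cmd = compose_segment_cmd("frames/batch_01.webp", "audio/batch_01.wav", "out/batch_01.mp4")
assert cmd[0] == "ffmpeg" and "-shortest" in cmd
```

Running 3-4 such invocations concurrently (e.g. via a process pool) is what keeps the total cycle time down.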
Stage 6: Stream Assembly
Videos combine into daily schedule: 12 videos (6 hours new + 6 hours replay) maintaining continuous viewer engagement and maximizing content utilization.
Live Streaming Infrastructure
OBS WebSocket Control
Python services programmatically manage OBS Studio via WebSocket:
- Generate unique stream IDs and configure scenes dynamically
- Set encoding parameters (1080p, 6000kbps, x264)
- Add overlays, backgrounds, and branded graphics per session
- Establish YouTube RTMP connection with authentication
Priority Queue System
Intelligent playback management:
- High Priority: Breaking opportunities, urgent hiring
- Standard Priority: Daily new content
- Fill Priority: Replay content preventing dead air
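The three-tier queue maps naturally onto Python's heapq. A sketch of the idea (not the production service; filenames are placeholders):

```python
import heapq
import itertools

HIGH, STANDARD, FILL = 0, 1, 2  # lower value plays sooner

class PlaybackQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a tier

    def add(self, priority: int, video: str):
        heapq.heappush(self._heap, (priority, next(self._counter), video))

    def next_video(self) -> str:
        return heapq.heappop(self._heap)[2]

q = PlaybackQueue()
q.add(FILL, "replay_morning.mp4")
q.add(STANDARD, "jobs_batch_03.mp4")
q.add(HIGH, "urgent_hiring.mp4")
assert q.next_video() == "urgent_hiring.mp4"   # breaking content jumps the queue
assert q.next_video() == "jobs_batch_03.mp4"
assert q.next_video() == "replay_morning.mp4"  # fill content prevents dead air
```

The counter matters: heapq alone would compare video names on priority ties, while the counter guarantees first-in-first-out order within each tier.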
YouTube Integration
Simultaneous live streaming and video archival with automated metadata generation (titles, descriptions, tags, thumbnails) derived from content data.
Monitoring & Alerting
Failed Node Detection
Real-time monitoring tracks every workflow node with immediate email alerts on failure including:
- Failure context, error diagnostics, impact assessment
- Affected content and recommended remediation actions
- Alert routing to appropriate teams (data, content, infrastructure, operations)
Recovery Workflow
- Automatic retry with exponential backoff (3 attempts)
- Alternative approaches (skip item, use cached content, fallback generation)
- Human escalation with pre-compiled diagnostics if automated recovery fails
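The retry policy can be sketched as a small helper. The injectable sleep function is purely for testability; attempt counts and delays mirror the 3-attempt exponential backoff described above:

```python
import time

def with_retry(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry fn with exponential backoff (1s, 2s, ...), re-raising on final failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: escalate to a human with diagnostics
            sleep(base_delay * (2 ** attempt))

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

delays = []
assert with_retry(flaky, sleep=delays.append) == "ok"
assert delays == [1.0, 2.0]  # two backoff waits before the third attempt succeeded
```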
Reliability Metrics: 99.5% uptime, 85% automated recovery rate, <2 min alert latency, <15 min MTTR
Orchestration Architecture
n8n Workflow Engine (self-hosted on Contabo VPS $5-6/month)
Coordinates scheduled triggers, data transformations, conditional logic, error handling, state management, and webhook integrations—all through visual workflows with comprehensive monitoring.
Custom FastAPI Microservices
Extend n8n capabilities:
- TTS Service: Kokoro inference, voice profiles, audio normalization
- FFmpeg Service: Video encoding, concatenation, format conversion
- Visual Pipeline Service: PPTX → PDF → WebP orchestration with validation
- OBS Control Service: WebSocket communication, stream configuration, health monitoring
Cost Efficiency
Total Monthly Cost: $10-15 (98% reduction vs. commercial)
- Contabo VPS: $5-6/month (8 vCPU, 16GB RAM, 400GB NVMe)
- GPT-4o Mini API: ~$3-5/month (3,600 scripts monthly)
- Kokoro TTS: $0 (self-hosted vs. $300-500/month commercial)
- Google Search API: $0 (100 free requests/day, optimized)
- YouTube Streaming: $0 (free platform)
Performance Characteristics
Daily Processing:
- 120 job listings extracted and processed
- 12 videos generated (30 minutes each)
- 6 hours new content produced
- 98% success rate extraction → streaming
- 4-6 hour end-to-end latency
System Efficiency:
- Parallel processing: 3-4 concurrent video generations
- 99.5% uptime with automated failover
- <15 minute recovery time for failures
- 360 videos/month (3,600 jobs presented)
Technical Innovations
- API Optimization: Batch requests, platform rotation, smart caching, and deduplication extract 120 jobs from 100 free API requests daily
- Quality Assurance: Multi-stage validation (script, audio, visual, video) maintains 98% success rate with automatic retry and fallback strategies
- Resource Optimization: Batch processing, intelligent caching (30-40% speedup for repeated content), off-peak scheduling (overnight processing)
- Streaming Reliability: Health monitoring every 30 seconds, automatic restart on degradation (2-3 min recovery), queue persistence preventing data loss
Key Challenges & Solutions
- API Rate Limits → Batch requests (10-15 jobs per call), platform rotation, supplemental RSS feeds
- Script Quality → Engineered prompts, multi-stage validation, automatic regeneration (3 attempts)
- Visual Pipeline → Three-step conversion with validation, PDF intermediary ensuring consistency
- Stream Continuity → Priority queue with intelligent replay preventing dead air during failures
- Cost Management → Self-hosted Kokoro TTS, Contabo VPS, free-tier APIs (98% savings)
- Monitoring Visibility → Comprehensive error handlers, email alerts with diagnostics, 85% automated recovery
Technical Stack
- Orchestration: n8n (self-hosted), Contabo VPS ($6/month)
- Backend: FastAPI, Python
- AI: GPT-4o Mini, Kokoro TTS (Hugging Face)
- Processing: FFmpeg, LibreOffice (headless), PowerPoint automation, WebP
- Streaming: OBS Studio, OBS WebSocket, YouTube APIs
- Data Sources: Google Custom Search API, LinkedIn, Indeed, Naukri, Internshala, Foundit, Monster
- Monitoring: Email alerts, health checks, automated recovery
- Storage: PostgreSQL, Docker containerization
Business Impact
Production Scale:
- 3,600 jobs presented monthly (120/day × 30)
- 360 videos produced monthly (12/day × 30)
- 180 hours new content generated monthly
- Fully autonomous 24/7 operation
Operational Excellence:
- 98% automation success rate
- $540-685 monthly savings (98% cost reduction)
- 99.5% uptime with <15 min MTTR
- Zero manual intervention required
Future Enhancements
- Content Intelligence: Trend analysis, personalized streams by job category, A/B testing, real-time adjustment based on viewer metrics
- Advanced Features: Multi-language support, dynamic thumbnails, automated chapters, real-time subtitles, voice cloning
- Infrastructure: GPU acceleration (50-70% faster processing), distributed queue system, CDN integration, multi-stream support
- Monitoring: Predictive failure detection, automated remediation, Slack/Discord integration, performance degradation alerts
Key Takeaways
This project demonstrates production-grade automation orchestrating AI content generation, open-source TTS, media processing, and live streaming into a fully autonomous platform operating at 1/50th the cost of commercial solutions.
Achievement: Built a system handling production workloads (360 videos/month, 99.5% uptime) on a $6/month VPS—proving sophisticated automation doesn't require enterprise budgets when properly architected.
Value: Transforms hours of manual work into automated processing delivering 3,600 job opportunities monthly with 98% success rate and comprehensive monitoring—serving as a blueprint for cost-effective, large-scale content automation.
Scale: 120 jobs/day, 12 videos/day, 6 hours content/day, 360 videos/month
Economics: $10-15/month (98% savings vs. $550-700/month commercial)
Reliability: 99.5% uptime, 98% success rate, <2 min alerts, <15 min MTTR
Status: Production deployment, actively streaming, continuous optimization
RAG-Powered Email Personalization System

System Overview
An intelligent email automation platform that generates personalized weekly newsletters by extracting relevant information from company documents and tailoring messages to individual recipients at scale. The system eliminates manual email writing, reduces campaign preparation from hours to minutes, and enables a small team to manage mass outreach with comprehensive approval workflows and delivery tracking.
The Problem
Managing weekly customer outreach newsletters presented significant bottlenecks:
- Manual email writing consuming hours per campaign researching company updates from scattered documents
- Personalization at scale requiring individual customization for different recipient segments
- Small team capacity limiting outreach volume and consistency
- Data scattered across PDFs, Word docs, Excel sheets making relevant information hard to extract
- Quality control needed before sending to customer base
Solution Architecture
RAG-Based Intelligence Pipeline
The platform implements Retrieval-Augmented Generation ensuring emails contain accurate, relevant company information rather than AI hallucinations.
Document Knowledge Base
Ingests and processes company documents creating a searchable semantic database:
- Supported formats: PDF, DOCX, Excel (product updates, case studies, announcements, internal reports)
- Vector storage: Supabase pgvector for efficient semantic search
- Embedding generation: Google Gemini text-embedding-004 model
- Chunking strategy: Intelligent document splitting preserving context and relationships
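A minimal overlapping-window chunker illustrating the splitting idea. The exact strategy and sizes used in the system are not specified here; the numbers below are placeholders:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list:
    """Split text into overlapping windows so context survives chunk boundaries."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "x" * 1200
chunks = chunk_text(doc)
assert len(chunks) == 3                    # windows 0-500, 400-900, 800-1200
assert all(len(c) <= 500 for c in chunks)
```

The overlap means a sentence straddling a boundary appears whole in at least one chunk, which keeps retrieval from returning fragments with missing context.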
Personalization Engine
For each recipient, the system:
- Retrieves context from knowledge base based on email topic and recipient profile (industry, past interactions, interests)
- Generates content using Google Gemini with retrieved documents as grounding context
- Personalizes based on recipient data from Excel sheets (name, company, role, previous engagement)
- Validates output for tone consistency, length constraints, and factual accuracy
Workflow & Approval System
n8n Orchestration
The entire process runs through custom n8n workflows providing visual oversight and control.
Email Generation Workflow:
- Recipient import from Excel sheets (names, emails, companies, segments)
- Topic definition specifying email purpose and key messages
- RAG retrieval pulling relevant company information from vector database
- Gemini generation creating personalized drafts for each recipient
- Preview compilation assembling all emails for review
Telegram Approval Integration
Rather than email-based review, the system uses Telegram for mobile-friendly approval.
Approval Flow:
- Draft notification sent to approver via Telegram with campaign summary
- Gmail draft automatically created in approver's inbox for detailed review
- n8n preview panel displays all generated emails with recipient details
- Inline approval buttons in Telegram (Approve All, Review Individual, Reject)
- Single approver reviews and authorizes before mass sending
Average approval time: <30 minutes from generation to authorized send
Gmail Integration & Delivery
Mass Email Distribution
Once approved, the system orchestrates personalized sending via Gmail API.
Gmail Node Features:
- Personalized sending: Individual emails to each recipient (not BCC mass mail)
- Read tracking: Gmail read receipts monitoring email opens
- Follow-up triggers: Automated reminders for unopened emails after 3-5 days
- Rate limiting: Respects Gmail sending limits (500 emails/day) with queue management
- Error handling: Failed sends automatically retry, log issues, alert team
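The quota-aware scheduling can be sketched as assigning each recipient a send date before enqueueing (helper name, dates, and addresses are illustrative):

```python
from datetime import date, timedelta

GMAIL_DAILY_LIMIT = 500

def schedule_sends(recipients: list, start: date, daily_limit: int = GMAIL_DAILY_LIMIT) -> dict:
    """Spread a campaign across days so no day exceeds the sending quota."""
    plan = {}
    for i in range(0, len(recipients), daily_limit):
        day = start + timedelta(days=i // daily_limit)
        plan[day] = recipients[i:i + daily_limit]
    return plan

recipients = [f"user{n}@example.com" for n in range(1200)]
plan = schedule_sends(recipients, date(2025, 1, 6))
assert len(plan) == 3                            # 1200 recipients -> 3 send days
assert len(plan[date(2025, 1, 6)]) == 500        # full quota on day one
assert len(plan[date(2025, 1, 8)]) == 200        # remainder on day three
```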
Recipient Management: Excel sheets track contact details, send history, open status, and follow-up needs. n8n nodes update sheets post-send with delivery status and engagement metrics. Follow-up workflow automatically identifies unopened emails triggering gentle reminder campaigns.
Engagement Visibility: Real-time dashboard in n8n showing open rates, pending follow-ups, and campaign performance.
Technical Architecture
n8n Workflow Orchestration
Custom nodes coordinate the entire pipeline:
- Excel integration importing recipient data, updating delivery status
- Supabase vector queries retrieving relevant document sections
- Gemini API calls generating personalized content with context
- Gmail operations creating drafts, sending emails, tracking reads
- Telegram webhook handling approval interactions
- Conditional logic routing based on approval status, send quotas, error conditions
Supabase pgvector Implementation
PostgreSQL with pgvector extension provides scalable semantic search:
- Document chunks stored with embeddings (1536 dimensions, Gemini model)
- Similarity search finding top-k relevant sections for each email topic
- Metadata filtering by document type, date, department, relevance
- RLS policies ensuring secure document access control
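Conceptually, the top-k retrieval that pgvector performs (via its `<=>` cosine-distance operator) looks like this in pure Python, with toy 3-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, chunks, k=2):
    """chunks: list of (text, embedding) pairs; returns the k most similar texts."""
    ranked = sorted(chunks, key=lambda c: cosine(query, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

chunks = [
    ("product launch notes", [0.9, 0.1, 0.0]),
    ("case study: retail",   [0.1, 0.9, 0.0]),
    ("holiday schedule",     [0.0, 0.1, 0.9]),
]
assert top_k([1.0, 0.2, 0.0], chunks, k=2) == ["product launch notes", "case study: retail"]
```

In production the database does this ranking with an index rather than a full sort, which is what keeps retrieval under the 2-second budget.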
Google Gemini Integration
Dual usage for efficiency:
- Embeddings (text-embedding-004): Convert document chunks and queries to vectors
- Generation (gemini-pro): Create email content with retrieved context as grounding
Key Features
Intelligent Content Creation
- Context-aware emails: System retrieves relevant company updates, product launches, case studies matching recipient industry and interests
- Factual grounding: RAG architecture prevents hallucinations by anchoring generation in actual company documents
- Tone consistency: Maintains professional brand voice across all personalized variations
- Length optimization: Targets ideal newsletter length balancing information density and readability
Approval & Quality Control
- Visual preview: n8n interface displays all generated emails before sending
- Gmail draft review: Approver sees exact email format and content in familiar Gmail interface
- Telegram mobile workflow: Quick approval from anywhere without desktop access
- Edit capability: Approver can modify drafts in Gmail before final authorization
Scalability
- Mass personalization: Generate unique emails for 100+ recipients in <10 minutes
- Queue management: Respects Gmail limits while maximizing throughput
- Error resilience: Failed sends retry automatically without losing data
- Weekly cadence: Supports consistent newsletter schedule with minimal manual effort
Business Impact
Time Efficiency:
- Before: 3-4 hours per weekly newsletter (research updates, write emails, personalize, send)
- After: 30-45 minutes per campaign (define topic, review drafts, approve, automated send)
- Time savings: ~80% reduction in campaign preparation effort
Team Productivity:
- Small team enablement: 1-2 people manage weekly outreach to 100+ customers
- Consistent cadence: Reliable weekly newsletters previously impossible with manual process
- Reduced bottlenecks: Automation eliminates research and writing delays
Communication Quality:
- Improved personalization: Each email tailored to recipient context (vs. one-size-fits-all template)
- Factual accuracy: RAG grounding ensures company information correctness
- Professional consistency: Maintained brand voice across all communications
Technical Challenges & Solutions
- Document Knowledge Accuracy → RAG architecture retrieves actual document sections rather than relying on LLM memory. Gemini generates emails grounded in retrieved text preventing factual errors or outdated information.
- Personalization at Scale → Batch processing generates 100+ unique emails in parallel while maintaining individual context. Excel integration provides recipient data (industry, past interactions) informing personalization strategy.
- Approval Workflow Speed → Telegram integration enables mobile-first approval without desktop email access. Interactive buttons provide instant authorization while Gmail drafts offer detailed review when needed. Average approval time reduced from hours to <30 minutes.
- Gmail Sending Limits → Intelligent queue management respects 500 emails/day limit, distributes large campaigns across multiple days, implements exponential backoff for rate limit errors, and provides clear progress visibility.
- Follow-up Management → Automated read tracking updates Excel sheets with open status. n8n workflow identifies unopened emails after 3-5 days, generates gentle follow-up content, and queues reminder campaigns—eliminating manual tracking burden.
Technical Stack
- Workflow Orchestration: n8n (self-hosted, custom nodes)
- Vector Database: Supabase (PostgreSQL + pgvector extension)
- AI Models: Google Gemini (text-embedding-004, gemini-pro)
- Email Platform: Gmail API (sending, drafts, read tracking)
- Approval Interface: Telegram Bot API (webhooks, interactive buttons)
- Recipient Management: Excel integration (n8n nodes, automated updates)
- Document Processing: PDF parsing, DOCX extraction, Excel reading
- Infrastructure: Contabo VPS (self-hosted n8n instance)
- Deployment: Docker containerization, automated workflows
Operational Metrics
Weekly Newsletter Cadence:
- 100+ personalized emails generated per campaign
- 30-45 minute total campaign time (generation + approval + send)
- <30 minute average approval turnaround
- 80% time reduction vs. manual process
System Performance:
- Email generation: ~5-10 minutes for 100 recipients
- RAG retrieval: <2 seconds per query (vector search + embedding)
- Approval workflow: Mobile-accessible, real-time status updates
- Delivery tracking: Automated read monitoring, follow-up identification
Future Enhancements
- Content Intelligence: A/B testing different subject lines, content structures, CTAs; engagement analysis identifying high-performing topics; sentiment analysis on recipient replies
- Advanced Automation: Multi-language support, dynamic content blocks per recipient, automated scheduling based on recipient timezone, smart follow-up sequences with varying content based on engagement level
- Enhanced Personalization: CRM integration pulling richer recipient context, behavioral triggers (product usage, renewal dates) initiating targeted emails, industry-specific content recommendations
- Analytics Dashboard: Real-time campaign performance metrics, historical trend analysis, recipient segmentation insights, ROI tracking connecting outreach to business outcomes
Key Takeaways
This project demonstrates intelligent automation combining RAG architecture, workflow orchestration, and thoughtful approval processes to solve a real business problem: enabling small teams to maintain personalized, consistent customer communication at scale.
Technical Achievement: RAG implementation ensures emails contain accurate company information grounded in actual documents rather than generic AI-generated content—critical for maintaining professional credibility and brand trust.
Operational Value: 80% time reduction transforming weekly newsletter preparation from 3-4 hour manual effort to 30-45 minute supervised automation—enabling consistent outreach previously impossible with team capacity.
User-Centric Design: Telegram approval workflow recognizes that mobile-accessible, instant authorization matters more than elaborate review interfaces—pragmatic engineering serving actual team workflows rather than technical complexity for its own sake.
The system proves that thoughtfully designed automation doesn't replace human judgment—it amplifies it, handling tedious research and writing while preserving quality control and strategic oversight.
DashTro - Headless CMS

System Overview
A headless CMS that lets users define custom data schemas, create named collections based on those schemas, and manage documents within each collection. DashTro separates content structure from content delivery—schemas describe the shape of data, collections namespace instances of that shape, and documents hold the actual content, all served through a clean REST API.
Architecture
Monorepo — Frontend + Backend
The project is split into two independent packages:
- cms-frontend: React 18 + TypeScript SPA (Vite), communicating with the backend over Axios
- cms_backend: Django 5 + Django REST Framework API with PostgreSQL JSONB storage
Schema Builder
User-Defined Data Structures
Users define schemas with typed fields before creating any content:
- Supported types: String, Number, Boolean, NestedDoc, ReferenceDoc
- Schema names are validated as PascalCase; field names are enforced as snake_case — constraints applied at the serializer level so invalid shapes are rejected before reaching the database
- Schemas are stored in the cms_schema JSONB table, making the field definitions themselves queryable data
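The naming constraints can be sketched as serializer-level checks. The exact patterns DashTro enforces are an assumption; these regexes capture the usual PascalCase and snake_case rules:

```python
import re

PASCAL_CASE = re.compile(r"^[A-Z][a-zA-Z0-9]*$")
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def validate_schema(name: str, fields: list) -> list:
    """Return validation errors; an empty list means the shape is acceptable."""
    errors = []
    if not PASCAL_CASE.match(name):
        errors.append(f"schema name '{name}' must be PascalCase")
    for field in fields:
        if not SNAKE_CASE.match(field):
            errors.append(f"field '{field}' must be snake_case")
    return errors

assert validate_schema("BlogPost", ["title", "published_at"]) == []
assert validate_schema("blogPost", ["Title"]) == [
    "schema name 'blogPost' must be PascalCase",
    "field 'Title' must be snake_case",
]
```

Rejecting invalid shapes at the serializer means malformed schemas never reach the JSONB store.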
Collections & Documents
Namespaced Content Management
Collections link a named workspace namespace to a schema, providing isolation between content types:
- Each collection is tied to a schema, constraining which fields its documents can contain
- Documents are created with auto-generated IDs; their forms are dynamically generated at runtime from the linked schema definition—no hardcoded form fields anywhere in the frontend
- All document content is stored in the cms_workspace_data JSONB table, allowing arbitrary schema evolution without database migrations for individual field changes
API Design
RESTful Endpoints Across Three Resource Tiers
- Auth: /api/cms/auth/ — JWT login and token management
- Schema: /api/cms/schema/ (list/create), /api/cms/schema/<id>/ (retrieve/update/delete)
- Collections: /api/cms/collections/ (list/create), /api/cms/collections/<id>/ (update/delete)
- Documents: /api/cms/workspace/<ws>/collection/<col>/ (list/create), /api/cms/workspace/<ws>/collection/<col>/document/<id>/ (retrieve/update/delete)
Data Layer
PostgreSQL JSONB for Schema-Flexible Storage
All four core tables use JSONB columns:
- cms_schema — field definitions per schema
- cms_schema_collections — collection configs linking workspaces to schemas
- cms_workspace_data — document content
- cms_realtime — real-time data channel
A postgres_client.py utility module handles all JSONB read/write operations, keeping raw SQL out of view logic and making storage behaviour easy to test in isolation.
Frontend
React 18 + Redux Toolkit + Material-UI 7
The SPA is structured around page-level components and a shared component library:
- Pages: Login, Schema builder, Collection content, Document content, Settings
- 13 reusable components including SchemaComponent, DocumentList, PageForm, and LinkDrawer
- 5 Redux slices (schema, collection, document, schemaPreset, rootPath) providing predictable global state
- Custom hooks — useSchema, useCollection, useDocument, useSchemaMetaData — encapsulate data-fetching logic and keep pages thin
Technical Stack
- Frontend: React 18, TypeScript, Vite, Redux Toolkit, Material-UI 7, React Router 7, Axios, SASS
- Backend: Django 5, Django REST Framework, PostgreSQL (JSONB), JWT auth
Key Takeaways
DashTro demonstrates the power of JSONB-backed dynamic schemas—content structure is data, not code, so new content types require zero backend changes. The pattern of generating serializers and forms at runtime from schema definitions keeps the system genuinely headless: the API contract is driven by user configuration, not hardcoded models.
Schema: User-defined typed fields (String, Number, Boolean, NestedDoc, ReferenceDoc), PascalCase/snake_case validation
Storage: PostgreSQL JSONB — schema-flexible, no migrations for field changes
API: Dynamic DRF serializers generated from schema definitions at request time
Status: Active development
Education & Work Experience
Maharashtra Institute of Technology
Bachelors in Information Technology
Pune, Maharashtra, India
CGPA: 8.47/10
Cybage Software Pvt. Ltd.
Software Developer - Front End Developer
Pune, Maharashtra, India
University of Texas at Arlington
Masters in Computer Science
Dallas-Fort Worth Metroplex, Texas, United States
GPA: 3.83/4
Smart Cookie Rewards Inc.
Software Developer Intern
Remote, USA
Cybage Software Pvt. Ltd.
Software Developer - Front End Developer
December 2020 - July 2023
Pune, Maharashtra, India
Overview
Served as a contractor for Google, leading development and maintenance of 8+ high-traffic marketing websites including Google Sustainability, Google Lens, and Google Translate About. Delivered exceptional results in accessibility, performance, and user engagement while working with cutting-edge technologies and modern engineering practices.
Key Achievements
Project Leadership & Development
- Led development and maintenance of 8+ high-traffic Google marketing websites using Python, TypeScript, Flask, and Django
- Ensured WCAG-compliant accessibility standards while enhancing user engagement by 80% and site performance by 90%
- Implemented captivating animations using GSAP library, enhancing user engagement across Google Lens and Google Sustainability sites
- Collaborated with cross-functional teams to deliver UX-focused features following modern engineering practices
Technical Excellence & DevOps
- Developed full-stack solutions combining Angular front-end with NodeJS, Flask, and Django back-end systems
- Drove frontend test coverage through comprehensive unit and integration testing strategies
- Contributed to CI/CD automation on Google Cloud Platform (GCP) for multi-locale builds, reducing deployment time by 40%
- Maintained code quality through comprehensive testing tools, SonarQube code analysis, and modern code review practices
- Managed databases using MySQL, Firebase, and MongoDB for optimal data handling
- Implemented responsive web designs using Bootstrap framework for cross-device compatibility
- Streamlined development workflows through shell scripting automation and performance optimization techniques
Training & Development Projects
- Trained 25+ developers on modern JavaScript animation techniques to enhance animation and creative UI experiences
- Built a custom version of the 2048 game using HTML, SCSS, jQuery, JavaScript, and TypeScript with a NodeJS/Express backend
- Developed comprehensive online food ordering system with Angular frontend, Firebase integration, and NodeJS/Express backend using TypeScript and SCSS
Technical Stack
Frontend: Angular, TypeScript, JavaScript, jQuery, HTML, CSS, SCSS, Bootstrap, GSAP, ScrollMagic
Backend: Python, Flask, Django, NodeJS, Express
Databases: MySQL, MongoDB, Firebase
Cloud & DevOps: Google Cloud Platform (GCP), CI/CD Automation, Multi-locale Builds
Tools: Webpack, Gulp, Shell Scripting, Testing Tools, SonarQube, Unit Testing, Integration Testing
Smart Rewards Inc.
Software Developer Intern (Backend & Automation)
August 2025 - Present
Remote, USA
Overview
Building production-grade AI-powered automation systems at Smart Rewards Inc., specializing in Retrieval-Augmented Generation (RAG), conversational AI, and workflow automation. Designing intelligent systems that integrate advanced LLMs with backend infrastructure to reduce manual processes and enable non-technical users to leverage AI-driven automation at scale.
Key Achievements
AI & LLM Integration
- Architected production-grade RAG system for automated content generation integrating Supabase vector store with n8n workflows, demonstrating expertise in context-based retrieval and LLM orchestration
- Designed conversational AI workflow using Gemini and ChatGPT LLMs via OpenRouter, enabling natural language-driven automation for non-technical users, reducing manual processes by 85%
- Built end-to-end semantic search pipeline leveraging document embeddings, contextual relevance scoring, and LLM-based generation tuned for multi-market localization
- Engineered integration between RAG components, Gmail APIs, and n8n orchestration, reducing manual email creation time by 85%
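The retrieval step of such a semantic search pipeline can be illustrated with plain cosine similarity over precomputed embeddings. The two-dimensional vectors and document IDs below are made up for the example; a real pipeline would query a vector store such as Supabase instead.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    """store: list of (doc_id, embedding). Return the k best-matching doc ids."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical document embeddings (real ones come from an embedding model).
store = [
    ("pricing_faq", [0.9, 0.1]),
    ("onboarding",  [0.2, 0.8]),
    ("refunds",     [0.7, 0.3]),
]
print(top_k([1.0, 0.0], store))  # ['pricing_faq', 'refunds']
```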
Backend & Automation Architecture
- Developed AI-powered automation workflows using FastAPI microservices and containerized deployment via Docker, optimizing process throughput by 60%
- Designed event-driven architecture with comprehensive error handling and retry logic for reliable multi-step automation pipelines
- Implemented RESTful APIs with production-grade error handling and logging for seamless LLM and external service integrations
- Configured NGINX reverse proxy and cloud deployment infrastructure for scalable backend services
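The retry logic behind such pipelines can be sketched as a small exponential-backoff wrapper. This is an illustrative stand-in, not the actual n8n or FastAPI code.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on any exception.
    Simplified stand-in for retry logic in a multi-step automation pipeline."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# A hypothetical step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky_step))  # "ok" after two retries
```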
Development Practices & Tools
- Utilized AI-assisted development tools (Cursor, Claude Code, and GitHub Copilot) to accelerate workflow automation and backend development while maintaining high code quality and reliability
- Implemented Test-Driven Development (TDD) practices with comprehensive testing and quality assurance
Technical Stack
AI/ML & LLMs: RAG Systems, Gemini, ChatGPT, OpenRouter, Vector Databases, Semantic Search, LLM Orchestration
Backend: FastAPI, Python, Node.js
Automation & Workflows: n8n, Gmail APIs, Webhook Integration
Databases & Storage: Supabase, Vector Stores
Cloud & DevOps: Docker, NGINX, Cloudflare, Cloud Deployment
Development Tools: Cursor, Claude Code, GitHub Copilot, Git
Skills
Frontend
UI & Database
Backend
AI / ML
JavaScript
TypeScript
HTML & CSS
Angular
React
Next.js
Docker
Webpack
GSAP
SCSS
Material UI
Figma
Firebase
MongoDB
MySQL
Node.js
Django
Flask
FastAPI
GraphQL
Go
GitHub
Python
TensorFlow
Keras
Hugging Face
OpenCV
Streamlit
PyTorch
Testimonials
I truly appreciate your exceptional problem-solving skills. Your work on the Google Translate About project was outstanding, demonstrating your ability to foster effective teamwork and communication.
Your expertise in GSAP animation is impressive, and the quality of your work consistently reflects your dedication and pride. Keep up the excellent work, and all the best for your future endeavors!
Pratik Bhasar
Senior Software Developer
Your creativity and innovative approach to solving problems have significantly contributed to the success of our project. I've seen you grow tremendously since you've been working with us. Everyone on your team respects you for being a kind, helpful, and skilled individual.
Thank you for taking on the leadership of the new functionality on such short notice. Your proactive attitude clearly shows that you are a great leader in the making.
Apoorva Sapate
Senior Software Developer
Contact me
Get in Touch
I'm a dedicated software developer ready to turn innovative ideas into reality. Whether you have a project in mind or just want to connect, I'm eager to hear from you.
Reach out via the form or connect on social media. Let's create impactful software solutions together!
- [email protected]
- +1 (682) 340-7219
- Dallas, Texas, United States
Let's Talk
Message Sent
- [email protected]
- +1 (682) 340-7219
- Dallas, Texas, United States