I grew up in Vadodara, Gujarat — a city where centuries-old stepwells sit next to software parks and where my grandfather’s Vedic recitation practice first taught me that precision in language is precision in thought. That idea stuck. It followed me through a degree in engineering, a decade and a half of technology leadership across enterprises, and eventually into the work I do today: building AI systems that treat language not as a feature, but as architecture.
Today I lead GlobalVeda, where I design agentic AI pipelines — systems that perceive context, reason over it, and produce richly expressive output. My flagship project, Suksham Vachak, is a cricket commentary engine that orchestrates LLMs, text-to-speech providers, statistical models, and retrieval-augmented generation (RAG) to generate persona-driven audio from raw ball-by-ball data. It’s the kind of problem that demands both engineering rigour and a feel for narrative — how a pause after a wicket changes the story of an over.
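To make the orchestration shape concrete, here is a minimal Python sketch of the ball-by-ball-to-commentary step. Everything here is illustrative: the names (`BallEvent`, `Persona`, `build_prompt`) are hypothetical stand-ins, not the actual Suksham Vachak API, and the real system would hand the prompt to an LLM and the reply to a TTS provider.

```python
from dataclasses import dataclass

# Hypothetical sketch of a ball-by-ball -> commentary pipeline step.
# These names are illustrative, not the real Suksham Vachak interfaces.

@dataclass
class BallEvent:
    over: int
    ball: int
    batter: str
    bowler: str
    runs: int
    wicket: bool = False

@dataclass
class Persona:
    name: str
    style: str  # stylistic cue woven into the LLM prompt

def build_prompt(event: BallEvent, persona: Persona, context: list) -> str:
    """Assemble an LLM prompt from the event, retrieved context (RAG), and persona."""
    facts = f"{event.over}.{event.ball}: {event.bowler} to {event.batter}, "
    facts += "WICKET" if event.wicket else f"{event.runs} run(s)"
    history = " | ".join(context)
    return (
        f"You are {persona.name}, a {persona.style} cricket commentator.\n"
        f"Context: {history}\n"
        f"Ball: {facts}\n"
        f"Commentate in one sentence."
    )

# In the real pipeline this prompt would go to an LLM, and the reply
# to a TTS provider; here we only show how the pieces are assembled.
event = BallEvent(over=14, ball=3, batter="Kohli", bowler="Starc", runs=0, wicket=True)
persona = Persona(name="Akash", style="measured, analytical")
prompt = build_prompt(event, persona, ["Kohli on 47", "Required rate 9.2"])
```

The point of the sketch is the separation of concerns: raw event, retrieved context, and persona each contribute to the prompt independently, so any one of them can be swapped without touching the others.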
I’m equally obsessive about the infrastructure underneath. I self-host 12+ Docker services on a Raspberry Pi 5 — Caddy with automated ACME TLS, Cloudflare Tunnel for zero-port ingress, step-ca for private PKI, Home Assistant, Immich, Navidrome, Pi-hole — all version-controlled and documented in a Google Architecture Center–style Centre of Excellence (CoE), with C4 diagrams rendered from D2 source. For me, documentation isn’t an afterthought; if a decision isn’t recorded as an ADR or a diagram, it wasn’t actually made.
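A C4-style container view of that stack can be sketched in a few lines of D2. This is a toy rendering, not the actual CoE diagram source — labels and edges are illustrative:

```d2
# Illustrative C4-ish container diagram; labels are examples only.
internet: Internet
tunnel: Cloudflare Tunnel
caddy: Caddy (ACME TLS)
immich: Immich
navidrome: Navidrome
pihole: Pi-hole

internet -> tunnel: HTTPS
tunnel -> caddy: zero-port ingress
caddy -> immich
caddy -> navidrome
caddy -> pihole
```

Because diagrams like this live as plain text next to the ADRs, they diff, review, and version exactly like code.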
Whether it’s fine-tuning a TTS voice for Hindi cricket jargon or writing a Caddyfile that handles five subdomains, the thread is the same: build clearly, document ruthlessly, ship often.
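For a flavour of that Caddyfile, a five-subdomain layout might look roughly like this. Hostnames, upstream names, and ports are placeholders; automatic HTTPS via ACME is Caddy’s default behaviour, so no TLS directives are needed:

```caddyfile
# Hypothetical layout; real hostnames and ports will differ.
photos.example.com {
    reverse_proxy immich:2283
}
music.example.com {
    reverse_proxy navidrome:4533
}
home.example.com {
    reverse_proxy homeassistant:8123
}
dns.example.com {
    reverse_proxy pihole:80
}
docs.example.com {
    root * /srv/docs
    file_server
}
```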
Core technologies:
- Python
- FastAPI
- LLM Orchestration (Claude, Ollama)
- Docker & Compose
- Hugo / MkDocs Material
- D2 Diagrams-as-Code
- Caddy / Nginx
- Cloudflare
- Home Assistant
- MongoDB / SQLite / ChromaDB