01. Journey
My path started with classical machine learning, training models on structured datasets. While the math was fascinating, I quickly realized that model performance metrics often stayed in notebooks, disconnected from real user value. I wanted to build things that actually worked in production.
I transitioned into Applied AI, shifting my focus from hyperparameter tuning to system integration. The hardest challenges weren't in the model architecture but in the orchestration: building reliable pipelines, handling edge cases, and meeting latency requirements.
Today, I work on GenAI systems and agentic workflows. I build multi-agent systems where LLMs actually do work: retrieving data, making decisions, and executing tasks via tools. It's less about "chatbots" and more about intelligent automation layers that sit between users and complex databases.
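The "LLMs executing tasks via tools" pattern can be sketched in a few lines. This is a minimal, hypothetical example (the tool names and the pre-decided plan are made up): in a real system, an LLM would choose each step, but here the decision-making is stubbed out so the control flow stands on its own.

```python
# Minimal sketch of a tool-executing agent loop. All names are
# hypothetical; in production, an LLM picks the next (tool, argument)
# step instead of a fixed plan.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str  # shown to the model so it can choose a tool
    run: Callable[[str], str]

TOOLS: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    TOOLS[tool.name] = tool

register(Tool("lookup_order", "Fetch an order record by id",
              lambda arg: f"order {arg}: status=shipped"))
register(Tool("final_answer", "Return the final answer to the user",
              lambda arg: arg))

def run_agent(task: str, plan: list[tuple[str, str]]) -> str:
    """Execute a plan of (tool, argument) steps; the last result wins.
    A real agent would ask the model for one step at a time."""
    result = ""
    for tool_name, arg in plan:
        result = TOOLS[tool_name].run(arg)
    return result

answer = run_agent(
    "Where is order 42?",
    [("lookup_order", "42"), ("final_answer", "Your order has shipped.")],
)
```

The point of the registry is that adding a capability means registering one more `Tool`, not rewriting the loop.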
I prefer shipping usable AI services over academic experimentation. If it doesn't solve a problem reliably, it's just a demo.
02. Technical Arsenal
Applied AI
- GenAI Systems
- RAG Architectures
- Agentic AI & Multi-Agent Systems
- MCP & AI Tool-building
Engineering
- Python (FastAPI, Pydantic)
- TypeScript / Node.js
- Next.js (Frontend Delivery)
- API Design & System Integration
Platforms & Deployment
- Docker & Container Apps
- GitHub Actions
- Azure OpenAI / Vertex AI
- Cloudflare Pages
03. How I Build
I use Python for the heavy lifting—AI services, orchestration, and backend logic where data processing speed matters. For user-facing tools, I reach for TypeScript and Next.js because the type safety and ecosystem allow for rapid, reliable delivery.
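The validate-at-the-boundary style that FastAPI and Pydantic encourage can be sketched with the standard library alone. This is a stdlib stand-in, not Pydantic itself, and the schema (`SummarizeRequest`, its fields, and the limits) is entirely hypothetical:

```python
# Stdlib sketch of Pydantic-style request validation: reject bad input
# at the service boundary, before any model call. Schema is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SummarizeRequest:
    text: str
    max_words: int = 50

    def __post_init__(self) -> None:
        # Fail fast, the way Pydantic field validators do.
        if not self.text.strip():
            raise ValueError("text must be non-empty")
        if not 1 <= self.max_words <= 500:
            raise ValueError("max_words must be between 1 and 500")

def summarize(req: SummarizeRequest) -> str:
    # Placeholder for the actual model call; truncates instead.
    words = req.text.split()
    return " ".join(words[: req.max_words])
```

With Pydantic in a FastAPI route, the same schema would also drive automatic request parsing and OpenAPI docs, which is most of the "rapid, reliable delivery" argument.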
Infrastructure is just a delivery mechanism to me, not an identity. While I'm comfortable with Docker and containers, I don't obsess over Kubernetes clusters unless the scale demands it. My focus is always on clarity, reliability, and real-world usability. If the system is too complex to maintain, it's already broken.