Career
The full arc
The work did not move in random directions. It kept moving toward environments where the systems mattered more, the constraints were harder, and the standards were higher.
Early data science work
At Rajasri Infotech, I worked on recommendation engines, dynamic pricing models, monitoring, automated retraining, and dashboards tied directly to customer targeting and business outcomes.
What changed here: tying model work to measurable impact became a habit early, including a project that drove a 10% revenue increase.
Graduate-school technical depth
Graduate study gave me more technical depth around MLOps, CI/CD, model lifecycle, experimentation, statistics, and practical deployment. It was also the phase where I pushed through a large volume of hands-on ML work and started treating deployment discipline as a first-class concern.
What changed here: this is where the foundation for end-to-end ML systems got built, not just model experimentation.
Production ML at broader scale
At Asset Class Technologies, I moved into wider production deployment and led rollout of 15+ machine learning models across North America, Europe, and Asia, reaching roughly 20 million customers through recommendation, churn, and demand-forecasting systems.
What changed here: model quality alone stopped being enough. Cross-region reliability, standards, and rollout discipline started to matter just as much.
Speech and NLP systems
My work expanded into transformer-based speech-to-text, speaker identification, and real-time production integration. Those systems reached 98% accuracy and cut compute cost by 80% in live call-center analytics settings.
What changed here: once latency and live integration enter the picture, design decisions get much sharper.
Mission-critical enterprise AI
This is the strongest proof layer in my career. It includes spatiotemporal outage analytics, AI-driven infrastructure inspection, synthetic data generation, generative AI knowledge systems, large-scale geospatial analytics, HPC optimization, and enterprise MLOps.
Some of the clearest signals from this phase are 78% better outage-location accuracy, 92% defect-detection accuracy, 70% less manual inspection time, up to $2.8M in projected annual savings from knowledge systems, 35% cloud cost reduction on large analytics platforms, and 90% lower compute cost on HPC workloads.
Research, public proof, and teaching
As the work matured, it became visible through first-author publications at IEEE PES GM, CIGRE Paris, and DistribuTECH, plus AEIC awards, the Charles Steinmetz Top Innovator Award, NVIDIA GTC visibility, and teaching or mentoring work including advanced NLP instruction.
What changed here: the work held up not only in deployment, but also under outside scrutiny.
Founder chapter
Today, I’m building SAVYMINDS as the product chapter of the same operating standards behind the rest of the career. The current work is a cloud-first platform with strong governance and runtime foundations, followed by focused products for conversation-heavy enterprise workflows where trust and review matter.
What changes now: the work is moving from project and deployment history into a product thesis.