Program 02 — The AI Engineering Series

Master Agentic AI

Build, Deploy, and Operate Production AI Systems

The AI Platform Engineering Gap

Everyone is using AI tools. But who builds the platforms that power them? RAG pipelines, agentic systems, fine-tuned models, guardrails, evaluation frameworks, and multi-cloud deployments require a different skill set — one the market is only beginning to develop. Most engineers can call an API. Few can architect and operate the full AI engineering stack.

"The real skill gap isn't using AI tools. It's building and running the AI platforms that power them."

3x

Industry data shows AI/ML platform engineering roles have grown 3x since 2023, with demand outpacing the supply of engineers who can deploy, evaluate, and monitor production AI systems at scale.

Build SageDesk — An Enterprise AI Assistant

A production-grade, multi-tenant AI assistant with RAG, agents, fine-tuning, guardrails, evaluation, ML pipelines, and multi-cloud deployment — built incrementally across all nine modules.

10 Systems Built

10 fully integrated subsystems across the AI platform engineering lifecycle

4 Cloud Platforms

Vertex AI, Bedrock, Azure AI Studio, and OpenShift AI — all four deployed

75+ Technologies

LangChain, LlamaIndex, MLflow, Langfuse, Prometheus, vLLM, and more

SageDesk Subsystems You Build

Subsystem · What It Does
Knowledge Ingestion · Upload, chunk, embed, and store company documents for retrieval
RAG Pipeline · Retrieve, rerank, and generate grounded answers with source citations
Agent Orchestration · Multi-agent system — search, action, and escalation agents with tool use
Model Router · Route simple queries to SLMs, complex to LLMs — optimize cost and quality
Fine-Tuning Pipeline · Prepare training data, run LoRA/QLoRA, version and promote models
Guardrails Engine · PII detection, content filtering, hallucination detection
Evaluation Framework · Automated RAGAS metrics, A/B testing between model versions
ML Analytics Pipeline · Usage analytics, topic clustering, satisfaction prediction with MLflow
Platform Monitoring · Latency tracking, drift detection, auto-scaling, anomaly alerting
Multi-Cloud Deployment · Deploy on Vertex AI, Bedrock, Azure AI Studio, and OpenShift AI
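
To make the RAG Pipeline subsystem concrete, here is a minimal sketch of the retrieve-then-generate pattern. This is not course material: it uses a toy bag-of-words "embedding" and cosine similarity in place of the dense vector models and rerankers the real pipeline would use, and the sample chunks and query are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' -- a real pipeline uses dense vector models."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank stored chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical document chunks, as produced by a knowledge-ingestion step.
chunks = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "Refund requests require the original invoice number.",
]

# Retrieved context is prepended to the prompt so the answer stays grounded.
context = retrieve("how long do refunds take", chunks)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

The grounding step is the key idea: the model is asked to answer from retrieved context rather than from memory, which is what enables source citations.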

Technologies Used

Python 3.14 · FastAPI · React 19 · PostgreSQL 18 · LangChain · LlamaIndex · AutoGen · MLflow · Langfuse · Prometheus

Who Should Attend

This program is designed for engineers who want to architect, build, and operate production AI platforms — not just consume AI APIs.

🤖

ML Engineers

Move beyond model training — learn to deploy, serve, evaluate, and monitor in production

⚙️

Platform Engineers

Add AI infrastructure to your toolkit — model serving, GPU management, auto-scaling

💻

Backend Developers

Transition into AI engineering — build RAG pipelines, agents, and LLM applications

🏗️

Solutions Architects

Evaluate cloud AI platforms with hands-on deployment experience across all four

🔧

DevOps Engineers

Extend CI/CD to ML models, implement LLMOps and AIOps practices

📊

Engineering Managers

Understand what it takes to build and operate AI platforms — make informed team decisions

Prerequisites: Intermediate Python. Familiarity with SQL, Git, Docker, and command line. Completion of Master AI Tools (Program 01) recommended.

Nine Modules. 16 Weeks. One Complete Platform.

Each module extends SageDesk with new capabilities. By the end, you will have built a production-grade AI platform from scratch.

Four Cloud Platforms. Complete Deployment.

Every student deploys SageDesk on all four platforms — not just one. You leave with first-hand experience of each and the ability to make informed platform recommendations.

GCP Vertex AI

Managed model endpoints, Vertex AI Search, Vector Search, Cloud Monitoring

AWS Bedrock

Serverless model access, Knowledge Bases, Bedrock Agents, Bedrock Guardrails

Azure AI Studio

Azure OpenAI, AI Search, Content Safety, Prompt Flow

OpenShift AI

Self-managed model serving (KServe/vLLM), Kubernetes-native, full control

Module 5 dedicates 2 weeks to each platform. You understand the trade-offs between managed serverless (Bedrock), fully managed (Vertex AI, Azure AI Studio), and self-managed Kubernetes-native (OpenShift AI).

What Makes This Program Different

01

All Four Cloud Platforms, Not One

Most programs teach one cloud. You deploy on all four and understand the trade-offs — managed vs. self-hosted, serverless vs. Kubernetes, proprietary vs. open-source.

02

End-to-End, Not Just Models

From document ingestion to production monitoring. RAG, agents, fine-tuning, guardrails, evaluation, MLOps, AIOps — the complete AI platform engineering lifecycle.

03

One Real Project Throughout

SageDesk grows with each module. Every concept has a concrete, working implementation — not isolated exercises that go nowhere.

04

Production Practices from Week 1

Cost tracking, tenant isolation, model versioning, guardrails, evaluation metrics, drift detection — production concerns are woven into every module, not bolted on at the end.
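
As one illustration of the production concerns listed above, here is a hedged sketch of a drift check: a simple z-score comparison of recent latency against a baseline window. The program's monitoring stack is Prometheus-based; this toy function, its threshold, and its sample numbers are illustrative only.

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float],
                z_thresh: float = 3.0) -> bool:
    """Alert when the recent mean drifts more than z_thresh baseline
    standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_thresh

# Hypothetical latency samples in milliseconds.
stable = drift_alert([100, 105, 95, 102, 98], [101, 99, 100])
drifted = drift_alert([100, 105, 95, 102, 98], [200, 210, 205])
```

The same shape of check applies to embedding distributions or answer quality scores, not just latency.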

05

Open-Source First

Local development runs on Ollama and open-source models. MLflow and Langfuse are self-hosted. No vendor lock-in — you own every piece of the stack.
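
To show what local-first development looks like, here is a minimal stdlib-only call to Ollama's non-streaming generate endpoint at its default local address. This is a sketch, not course code: it assumes an Ollama server is running locally with a model (here `llama3`) already pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> bytes:
    """Non-streaming request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# generate("llama3", "Summarize RAG in one sentence.")  # requires a running Ollama server
```

Because the endpoint is plain HTTP on localhost, the same application code can later be pointed at a hosted model gateway without structural changes.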

Learning Outcomes

Upon completing this program, participants will be able to:

  • Design and implement RAG pipelines with retrieval, reranking, and grounded generation
  • Build multi-agent systems with tool use, memory, and orchestration
  • Fine-tune open-source models using LoRA/QLoRA and manage model versions
  • Deploy AI applications on Vertex AI, Bedrock, Azure AI Studio, and OpenShift AI
  • Implement guardrails for PII detection, content filtering, and hallucination detection
  • Build evaluation pipelines with RAGAS metrics and A/B testing
  • Track ML experiments with MLflow and implement CI/CD for models
  • Monitor AI systems with latency tracking, drift detection, and auto-scaling
  • Route queries between SLMs and LLMs for cost-optimized inference
  • Make informed cloud AI platform recommendations based on hands-on experience
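
The SLM/LLM routing outcome above can be sketched in a few lines. This is a hypothetical heuristic, not the program's router: the model names, keyword list, and length threshold are all invented for illustration, and a production router would use a learned classifier or confidence signals instead.

```python
def route(query: str,
          complex_keywords: tuple[str, ...] = ("code", "analyze", "multi-step", "compare")
          ) -> str:
    """Cost-aware routing sketch: short, simple queries go to a small local
    model; long or keyword-flagged queries go to a large model."""
    q = query.lower()
    is_complex = len(query.split()) > 30 or any(k in q for k in complex_keywords)
    return "llm-large" if is_complex else "slm-small"

# Simple lookup -> cheap small model; analytical request -> large model.
cheap = route("What are your opening hours?")
costly = route("Compare the refund policy across regions and analyze edge cases")
```

The payoff is economic: if most traffic is simple, most tokens are served by the cheap model while quality-sensitive queries still reach the large one.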

Program Schedule

16 Weeks · 2 Hrs / Week · Sunday Sessions

Format: Instructor-led, hands-on labs
Delivery: Virtual (live, instructor-led)
Duration: 16 weeks (one session per week)
Session: Every Sunday, 2 hours
Lab environment: Personal laptop (Ollama for local models) or cloud-based lab instances
Languages: Python 3.14 (backend) · TypeScript 5.8 / React 19 (frontend)
Takeaways: All lab code, deployment configs, and reference guide

Session Time by Timezone

Region · Timezone · Session Time
India · IST (UTC+5:30) · Sunday 7:00 PM – 9:00 PM
USA (East Coast) · EST (UTC-5) · Sunday 8:30 AM – 10:30 AM
UK / Europe · GMT (UTC+0) · Sunday 1:30 PM – 3:30 PM
UAE / Middle East · GST (UTC+4) · Sunday 5:30 PM – 7:30 PM
Singapore / East Asia · SGT (UTC+8) · Sunday 9:30 PM – 11:30 PM

Simple, Transparent Pricing

One-time fee. No hidden charges. Full program access from day one.


Master Agentic AI

$720

one-time payment

  • 75+ technologies across 9 modules
  • Deploy on all 4 cloud AI platforms
  • Build SageDesk — an enterprise AI assistant
  • Sunday live sessions
  • Certificate on completion
Secure payments powered by Razorpay

Ready to Build AI Platforms?

Join the next cohort — Sunday sessions, open-source tools, one real enterprise AI assistant.

Rathinam Trainers & Consultants Private Limited

sales@rathinamtrainers.com · www.rathinamtrainers.com
