Deloitte × Kindo
Updated 2026-04-30 08:07 PDT

Training Program — Session 1 Complete

#1 Priority — Stated as top priority by Arun Perinkolam (CTO/Principal) to 10+ stakeholders
📅 Date: April 8, 2026 (Wednesday) — Completed
🕐 Duration: ~3 hours (7:00–10:00 AM PT / 10:00 AM–1:00 PM ET)
🎓 Lead Trainer: Bryan Vann — live demos + guided exercises
👥 Attendance: 26 of 31 invited joined (first cohort of 75 total engineers)
☁️ Environment: Kindo SaaS training org with demo mode & mock MCP server
📖 LMS: learning.kindo.ai — self-serve portal available to all participants
Session 1 — What Was Delivered
Three-block structure: platform orientation, live product deep-dive, and hands-on breakout exercises.
▲ Block 1 — Platform Orientation (~30 min)

Overview of the Kindo platform — slides + live walkthrough of core concepts

  • Covered: Kindo as a platform for building autonomous agents focused on DevOps and SecOps
  • Security-first architecture with comprehensive audit logging and governance controls
  • Self-managed / on-premise deployment capability (rare for AI agent platforms)
  • Pre-work: participants logged into the training org at app.kindo.ai and reviewed learning.kindo.ai
■ Block 2 — Live Product Demo & Deep-Dive (~60 min)

Bryan Vann walked through the full Kindo product with live demonstrations

  • Agent Builder: Creating chatbot agents, workflow agents, and trigger agents from scratch
  • Integrations: CrowdStrike, Jira, ServiceNow, Splunk — all powered by MCP servers
  • Knowledge Stores: Vector-based (RAG) retrieval for large-scale document access vs. sandbox file system approach
  • Workflow Agents: Single-step vs. multi-step runs, parallel tool execution, structured output
  • Guided Exercise: Firewall Rule Optimizer — building prompt engineering skills with progressive prompt refinement
● Block 3 — Breakout Sessions & Show-and-Tell (~50 min)

Four breakout groups built their own agents and presented back to the class

  • 25-minute breakout sessions with Bryan Vann and Troy rotating through rooms for support
  • 25-minute show-and-tell where each group demonstrated what they built
  • Participants used demo mode and real integrations (Palo Alto, ServiceNow, Google Drive)
  • Wrap-up: structured feedback collection (what worked well, improvements, wish-list)
Breakout Group Results
Each of the 4 groups built and demonstrated their own agent during the hands-on session.

Group 1: Security Detection Rules

Built agents for automated detection rule creation and application security review

What They Built

  • URL detection rule agent for password spray activity during Windows login
  • Automated workflow: generate rule → store to GitHub → create ServiceNow change request
  • Application security review agent using CVE/NVD data for ServiceNow

Key Learning

  • Prompt specificity matters — vague prompts caused timeouts; granular instructions produced strong results
  • Model selection impacts performance — switched from GPT 5.2 (hanging) to Sonnet 4.6 with immediate improvement

Group 2: Threat Intelligence Analysis

Risk scoring and alert prioritization from TLP data

What They Built

  • Threat intelligence analysis agent with risk scoring and alert prioritization
  • Used TLP mock data for data ingestion and analysis
  • Aimed to generate charts and executive summaries from raw threat data

Key Learning

  • Knowledge stores don’t support CSV directly — workaround: convert to TXT or use step-level file input
  • Large files can cause timeouts — trimming the dataset to roughly 40% of its original size resolved the issue
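Group 2's CSV workaround can be sketched in a few lines. This is illustrative only: the helper name and the "Record N" line format are made up, since the exact text layout Kindo's knowledge stores prefer wasn't specified in the session.

```python
import csv
import io

def csv_to_txt(csv_text: str) -> str:
    """Flatten CSV rows into labeled plain-text lines that a
    knowledge store can index. Hypothetical helper for illustration."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    lines = []
    for i, row in enumerate(reader, start=1):
        fields = ", ".join(f"{h}: {v}" for h, v in zip(header, row))
        lines.append(f"Record {i} - {fields}")
    return "\n".join(lines)

sample = "alert_id,severity\nA-101,high\nA-102,low"
print(csv_to_txt(sample))
```

The same loop is also a natural place to trim rows before upload, which addresses the file-size timeouts the group hit.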

Group 3: Zero Trust Compliance

Firewall rule analysis scored against CISA and NIST Zero Trust frameworks

What They Built

  • Zero trust risk evaluation agent using CISA maturity framework + NIST ZTA
  • Uploaded framework documents as knowledge store files alongside sample firewall rules
  • Connected Palo Alto and ServiceNow integrations for real connector usage

Key Learning

  • Framework-based scoring (1–10 scale) produced granular, actionable output
  • Next step planned: prioritization agent for highest-impact / lowest-effort remediation actions
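Group 3's planned prioritization step could be as simple as ranking remediations by impact-to-effort ratio. A minimal sketch, assuming each action carries impact and effort scores on the group's 1–10 scale; the rule names and numbers below are invented:

```python
def prioritize(remediations: list[dict]) -> list[dict]:
    """Rank remediation actions highest-impact / lowest-effort first."""
    return sorted(remediations, key=lambda r: r["impact"] / r["effort"], reverse=True)

# Hypothetical findings with made-up scores, for illustration only.
actions = [
    {"rule": "allow-any-any", "impact": 9, "effort": 2},
    {"rule": "stale-vpn-rule", "impact": 4, "effort": 1},
    {"rule": "legacy-telnet", "impact": 8, "effort": 6},
]
ranked = prioritize(actions)  # "allow-any-any" ranks first (ratio 4.5)
```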

Group 4: Platform Exploration

Foundational agent building — Google Drive integration and human input patterns

What They Built

  • Google Drive integration agent: list files, read content, create files, delete files
  • Explored human input placeholders for both text and file inputs

Key Learning

  • Learned how to add user-facing input fields to agent workflows
  • Demonstrated that even non-security use cases work seamlessly on the platform
Participant Feedback
Collected during the structured wrap-up Q&A at the end of Session 1.

✅ What Worked Well

Feedback on the most effective parts of the training

  • Easy to follow structure with clear progression from overview to hands-on
  • Being able to log in and follow along in the product during the demo
  • Structured exercises followed by freeform breakout sessions
  • Hands-on lab work was the most valuable portion of the training
  • "I’m much smarter and confident and starting to play around in this, which I think is the intent."

💡 Improvements for Next Session

Actionable feedback to incorporate into Sessions 2–4

  • More time on labs: Get into the hands-on exercises quicker — participants want more build time
  • Pre-training video content: Move the platform overview to an async prerequisite video so live time is maximized for hands-on work
  • Common troubleshooting: Include a section on typical issues and how to resolve them as engineers ramp up
  • Technical depth as optional: Offer deeper technical details (RAG internals, MCP architecture) as supplemental content for those who want it
Key Technical Topics Covered
Major concepts and platform capabilities demonstrated during Session 1.

🔧 MCP Servers & Integrations

All Kindo tools are MCP servers — API, SDK, CLI, and browser automation paths

  • Every tool in the platform is an MCP server running within Kindo
  • Integration paths: API (most common), SDK, CLI, or browser automation (Playwright)
  • Authentication handled securely — credentials never passed directly to the LLM
  • Demonstrated: CrowdStrike, Jira, ServiceNow, Splunk, Google Drive, Palo Alto
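Since every tool is exposed as an MCP server, a tool invocation on the wire is a JSON-RPC 2.0 `tools/call` request. A minimal sketch of that message shape; the `search_tickets` tool name and its arguments are hypothetical (real names come from the server's `tools/list` response):

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only.
msg = build_tool_call(1, "search_tickets", {"query": "open incidents"})
```

Note that credentials never appear in these messages: authentication happens inside the MCP server, which is why the LLM never sees them.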

🧠 Knowledge Stores & Data Handling

Two techniques for working with large data: vector retrieval (RAG) vs. sandbox file system

  • Knowledge Stores (RAG): Semantic vector retrieval — handles gigabytes of data, pulls only relevant chunks
  • Sandbox: LLM navigates a file system iteratively — best for smaller, structured data
  • CSV limitation noted: knowledge stores don’t support CSV directly; use step-level file input or convert to TXT
  • Agent-to-agent orchestration discussed (alpha feature, also possible today via webhooks)
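The webhook path for agent-to-agent orchestration amounts to one agent POSTing a payload to another agent's trigger URL. A standard-library sketch; the endpoint URL and payload schema here are invented for illustration (a real per-agent webhook URL would come from the Kindo platform):

```python
import json
import urllib.request

# Hypothetical trigger endpoint; a real one is issued per agent.
AGENT_WEBHOOK_URL = "https://example.invalid/agent-webhook"

def build_webhook_request(payload: dict) -> urllib.request.Request:
    """Prepare a POST that hands a finding from one agent to another."""
    return urllib.request.Request(
        AGENT_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_webhook_request({"finding": "rule FW-42 violates least privilege"})
# urllib.request.urlopen(req)  # sending this would trigger the downstream agent
```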
Next Steps — Scaling to 75 Engineers
3–4 additional sessions planned to train the remaining cohorts, incorporating Session 1 feedback.

Session 1 ✅

April 8, 2026 — Completed
  • First cohort: ~26 attendees trained
  • Full 3-block format delivered
  • Session recorded for async distribution
  • Feedback collected and incorporated

Sessions 2–4

Upcoming — TBD
  • Remaining ~49 engineers across 3+ sessions
  • Pre-training video content for overview (async)
  • More lab time, less lecture
  • Common troubleshooting section added

LMS & Self-Serve

Ongoing
  • learning.kindo.ai — extending docs portal
  • Session recordings available for self-study
  • Training assistant chatbot for ongoing support
  • Video walkthroughs + role-based learning paths
Supplementary Resources
📖 Platform Documentation — expanded walkthroughs, quickstarts, and best practices (docs.kindo.ai)

🤖 Interactive Training Assistant — Kindo-powered chatbot trained on the full documentation (Open Agent)
Open Items
Session 1 delivered — April 8, 2026 — 26 attendees, full 3-block format completed
Session recording — recorded and available for distribution to team
Training environment — Kindo SaaS training org with demo mode confirmed working
Pre-training video content — platform overview video to be created for Sessions 2–4 (based on participant feedback)
Sessions 2–4 scheduling — dates TBD for remaining ~49 engineers
Agent cost visibility — credit-to-dollar conversion and per-agent cost tracking coming to Kindo command center (roadmap item raised during training)
Super user names + LinkedIn profiles — still pending from Smriti/Manoj
Stakeholders
Deloitte Side

  • Arun Perinkolam — CTO/Principal; executive sponsor, leads Meta Global Ops
  • Manoj Bhale — super user list, instance provisioning
  • Luv Parakh — scheduling coordination
  • Smriti Kewlani — use case details, setup call coordination

Kindo Side

  • Tony Wong — strategic lead, Arun relationship
  • Bryan Vann — lead trainer; live session delivery