I'm a Cybersecurity Analyst II at 11:11 Systems, where I handle security operations for 50+ enterprise clients while engineering AI security solutions that improve how our team investigates threats.

I don't just study AI security—I build systems that run in production. My approach is shaped by hands-on SOC operations: I've seen what breaks, what analysts trust, and what actually works under operational constraints. This operator's perspective informs every guardrail architecture I design.

My Approach

AI security isn't just about preventing hallucinations; it's about defense in depth: treating AI output as untrusted and preventing unsafe behaviors. I build multi-layer guardrail systems that combine deterministic validation, semantic analysis, and policy enforcement so that AI-augmented tools and agentic systems fail safely in production environments.

Key Principles:

  • Guardrails must be composable and layered
  • Auditability matters more than raw accuracy for security tools
  • Production systems need deterministic safety controls
  • Analyst trust determines adoption
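The layered, composable design these principles describe can be sketched as a small pipeline: cheap deterministic checks run first, semantic analysis second, policy enforcement last, and the first denial wins. This is a minimal illustrative sketch, not my production system; the layer names, patterns, and `Verdict` type are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    layer: str      # which layer decided, for audit trails
    reason: str = ""

GuardLayer = Callable[[str], Verdict]

def deterministic_layer(text: str) -> Verdict:
    """Predictable pattern checks run first (illustrative blocklist)."""
    for pattern in ("rm -rf", "DROP TABLE"):
        if pattern in text:
            return Verdict(False, "deterministic", f"blocked pattern: {pattern!r}")
    return Verdict(True, "deterministic")

def semantic_layer(text: str) -> Verdict:
    """Stand-in for a classifier/embedding check; stubbed with a heuristic."""
    if "ignore previous instructions" in text.lower():
        return Verdict(False, "semantic", "possible prompt injection")
    return Verdict(True, "semantic")

def policy_layer(text: str) -> Verdict:
    """Organizational policy enforcement, e.g. size limits."""
    if len(text) > 10_000:
        return Verdict(False, "policy", "exceeds policy length limit")
    return Verdict(True, "policy")

def evaluate(text: str, layers: list[GuardLayer]) -> Verdict:
    """Run layers in order; the first denial wins (fail closed)."""
    for layer in layers:
        verdict = layer(text)
        if not verdict.allowed:
            return verdict
    return Verdict(True, "all")

PIPELINE = [deterministic_layer, semantic_layer, policy_layer]
```

Because every `Verdict` records which layer fired and why, each decision is auditable, and layers can be added, reordered, or swapped without touching the others, which is what "composable and layered" buys you.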

My Journey

I started in SOC operations at 11:11 Systems, conducting incident response, detection engineering, and threat hunting across multi-cloud environments. I automated repetitive analyst tasks with deterministic controls, which behaved consistently and predictably. But non-deterministic AI systems introduced new vulnerabilities and eroded trust: analysts wouldn't use tools that made mistakes.

That's when I shifted focus to AI security: building systems that augment human analysts while maintaining the trust and auditability security operations require. My work now spans the full AI security lifecycle, from architecting guardrail systems to conducting adversarial testing against the OWASP Top 10 for LLM Applications and designing evaluation frameworks with safety controls.

Before 11:11, I built CDIC's first production SOC from scratch, leading 30+ analysts and learning what security looks like when resources are constrained but threats are real.

My Mission

Build AI security systems that enable innovation without compromising safety. Security shouldn't block progress—it should make the safe path the easy path. I architect guardrail systems that let teams deploy AI confidently, knowing their systems will fail safely when the unexpected happens.

Core Expertise

AI Security

  • Adversarial ML
  • Model Security & Hardening
  • Threat Modeling & AI Red Teaming
  • AI Agents & MCP Orchestration

Cybersecurity

  • SIEM, SOAR, EDR, XDR, CRS
  • Incident Response
  • Threat Intelligence
  • Cloud & Endpoint Security

Certifications

17+ certifications including:

  • GIAC: GFACT, GSEC, GCIH, GCIA
  • CompTIA: Security+, CASP+, Pentest+
  • CCNA, SSCP, ITIL