From AI to Production - Secure at Inception: Manoj (Snyk)
Chief Innovation Officer & Chief Marketing Officer @ Snyk
Having raised US$1B+ and with current ARR at US$300M+, Snyk is a global leader in DevSecOps.
Snyk's Chief Innovation Officer & Chief Marketing Officer, Manoj Nair, discusses how the company is embedding security into AI-assisted coding. He explains how Snyk balances enterprise trust with innovation speed, and where he sees the developer security stack heading in the next five years.
Q: Let’s start with your role — what do you focus on at Snyk?
Manoj: I run Snyk’s innovation group and marketing team. On the innovation side, think of it like an internal VC incubator - building next-gen products, sometimes from scratch, sometimes through acquisitions, often co-built with strategic partners like Google, Atlassian, and ServiceNow. We also manage our most strategic design partner customers directly.
On the marketing side, I’ve been leading the transformation of Snyk’s voice in the market - and using AI not just in what we say, but in how we run marketing itself.
I’m an engineer by training, an “accidental” product manager, and a former entrepreneur. After selling my startup, I joined Snyk because it felt like a generational company opportunity - one my entire career had prepared me for.
Q: AI coding tools like Cursor, Replit, and GitHub Copilot are everywhere. How does Snyk fit into this world?
Manoj: Snyk was founded on a simple premise: give developers the tools and context to write secure code from the start, rather than relying solely on detection and remediation after deployment. This philosophy helped propel the “shift left” movement - if you’re a fan of it, you can thank us; if not, you can blame us.
Our early use of AI focused on supercharging DevSecOps. We acquired a spin-out from ETH Zurich, one of the world’s top machine learning research hubs, that had built a hybrid AI engine capable of scanning millions of lines of code in under a minute. Legacy tools often took six hours or more to process similar workloads. That leap in speed and accuracy enabled us to integrate security checks seamlessly into live developer workflows, reducing friction and increasing adoption.
When AI coding assistants like GitHub Copilot began gaining traction, enterprise customers wanted to know if Snyk could operate in parallel with these tools, scanning code as it was generated. This evolved into full integrations (e.g. with Gemini Code Assist) where Snyk’s security layer evaluates code before it even reaches the developer, preventing insecure patterns from ever entering the repository.
Today, our aim is what we call “secure at inception.” With estimates suggesting that 50%–95% of all code could be AI-generated in the near future, embedding security context at the moment of creation is no longer optional - it’s essential.
Q: What’s been the biggest challenge in building for this AI-assisted development world?
Manoj: Speed is critical. AI is evolving so quickly that keeping pace isn’t enough. We need to stay ahead. That’s why we’ve invested heavily in talent, hiring and acquiring top AI security researchers, such as those from our Invariant Labs acquisition. These PhDs have uncovered vulnerabilities in major platforms like GitHub’s MCP server and contributed tools to the open-source security community.
We pair this deep research capability with agile engineering squads that can quickly turn findings into real-world protections. These teams work on everything from scanning MCP servers for malicious components, to building detection for AI model and agent risks, to hardening security in an “AI-native” development lifecycle where code generation, design, and inference are tightly integrated.
Q: How do you balance innovation speed with enterprise requirements like compliance and trust?
Manoj: Many large enterprises want to move fast, but with trusted vendors. Our platform advantage is that we’re already deployed in thousands of environments, so customers don’t have to re-integrate identity, security scanning, or compliance frameworks.
We use feature flags and public “labs” experiments so customers can try new AI security capabilities in controlled environments. They get the speed of innovation without losing the compliance and governance they need.
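The mechanics of that rollout model are familiar feature-flag gating: experimental capabilities ship dark and are switched on per tenant. A minimal sketch, with invented flag names and tenant IDs (not Snyk's actual configuration):

```python
# Per-tenant feature flags for a "labs" capability.
FLAGS = {
    "labs.ai-mcp-scanning": {"enabled_tenants": {"acme-corp"}},
}

def is_enabled(flag: str, tenant: str) -> bool:
    """A tenant sees the experimental path only if explicitly opted in."""
    cfg = FLAGS.get(flag)
    return cfg is not None and tenant in cfg["enabled_tenants"]

# Opted-in tenants get the experimental scanner; everyone else keeps
# the stable, compliance-reviewed behavior.
if is_enabled("labs.ai-mcp-scanning", "acme-corp"):
    pass  # run experimental capability
```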
Q: Looking ahead 3–5 years, how do you see the developer security stack evolving with AI agents embedded in workflows?
Manoj: I think there will be four pillars:
Build-time AI security - Guardrails around all usage of models and agents, independent of the model providers. You don’t want the fox guarding the henhouse.
Identity & authorization - Evolving to handle fine-grained, ephemeral permissions for AI agents as well as humans.
Runtime “safety net” - A consolidation of CNAPP, EDR, NDR, and XDR into a unified real-time safety fabric.
Governance, risk, and compliance - A horizontal layer ensuring everything meets security and regulatory requirements.
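The identity pillar above - fine-grained, ephemeral permissions for agents as well as humans - can be illustrated with short-lived, scope-restricted grants. The grant model here is invented for illustration, not any particular vendor's scheme:

```python
import time

def issue_grant(agent_id: str, scopes: set[str], ttl_seconds: int) -> dict:
    """Mint a short-lived grant limited to explicitly named scopes."""
    return {
        "agent": agent_id,
        "scopes": scopes,
        "expires_at": time.time() + ttl_seconds,
    }

def authorize(grant: dict, scope: str) -> bool:
    """Allow an action only if the grant is unexpired and covers the scope."""
    return time.time() < grant["expires_at"] and scope in grant["scopes"]

# An agent granted read access for 60 seconds cannot write, and loses
# even read access once the grant expires.
grant = issue_grant("build-agent-7", {"repo:read"}, ttl_seconds=60)
```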
As Marc Andreessen once said, every company became a software company. Now, every company will be an AI company — and AI is eating software. Securing that shift will be one of the most important challenges of our time.


