AI-powered coding tools are revolutionizing development workflows, but they’re creating security challenges faster than most organizations can adapt. The promise of unprecedented productivity gains is real, but so are the risks of security vulnerabilities, technical debt, and developers who can generate code they can’t maintain.
Our expert panel brought together Rob van der Veer (Chief AI Officer, Software Improvement Group), Dustin Lehr (Co-founder, Katilyst), and Chris Lindsey (Field CTO, OX Security) to share hard-learned lessons from the front lines of AI-assisted development. Here’s what every team needs to know.
For Junior Developers: The Hidden Dangers of AI-Generated Code
If you’re early in your career, AI coding tools might seem like a shortcut to productivity. But our experts revealed a critical warning: AI often generates code that looks correct but contains subtle security flaws that inexperienced developers won’t recognize.
Lindsey shared a telling example: “AI created SQL injection ‘protection’ using regex validation – completely wrong – instead of proper parameterized queries. A junior developer looking at this may not know that the methods being created are right or wrong.” Our own research on Cursor confirmed this pattern, finding that AI readily generates vulnerable code – from unencrypted payment APIs to servers wide open for XSS attacks – often with minimal or no security warnings.
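To make the anti-pattern concrete, here is a minimal sketch in Python (the table, function names, and regex are illustrative, not from the panel): the first function tries to “protect” a query with a regex blocklist, which attackers can slip past; the second uses a parameterized query, the fix Lindsey points to.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Anti-pattern: regex "sanitization" of the kind Lindsey describes.
# A blocklist like this is easy to bypass (encoding tricks, keywords
# it doesn't cover) and gives a false sense of safety.
def find_user_unsafe(name: str):
    if re.search(r"(--|;|'|\bdrop\b)", name, re.IGNORECASE):
        raise ValueError("suspicious input")
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

# The fix: a parameterized query. The driver keeps the data separate
# from the SQL text, so no input can change the statement's structure.
def find_user_safe(name: str):
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```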
The recommendation is stark: junior developers should avoid using AI for production code until they can spot these anti-patterns, or ensure all AI-generated code gets senior review. MIT research backs this up – students using ChatGPT completed tasks faster but couldn’t answer questions about their work afterward, while those using traditional methods actually understood what they built.
For Teams Already Using AI: The Code Review Crisis
If your team is already generating code with AI, you’re likely facing a hidden bottleneck that’s about to get worse. Van der Veer’s research reveals that coding represents only 15-20% of development work, so even dramatic AI speed improvements have limited overall impact. The real problem? Review.
“Review skills – the skills required to review code – are higher than the skills needed to write code,” van der Veer explained. Yet AI is generating far more code that needs reviewing, creating a steep increase in workload for senior team members.
The compounding issue is review fatigue. “AI mistakes are relatively rare, so if there’s a mistake in 1% of the code, you simply zone out,” he noted. Organizations must resist the temptation to automate review processes and instead invest heavily in maintaining rigorous human oversight.
For Security Leaders: New Attack Vectors and Simple Wins
Security teams face entirely new threats in the AI era. “Slop squatting” represents a particularly insidious attack vector where malicious actors create packages with names that AI commonly hallucinates. “If you end up using code with a hallucinated library link, you’re pointing to malicious code directly,” Lehr warned.
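One pragmatic defense is to refuse any dependency that isn’t on a vetted internal list, so a hallucinated name fails the build instead of resolving to an attacker’s package. The sketch below is a hypothetical illustration in Python; the allowlist contents and file format are assumptions, not something the panel prescribed.

```python
import re

# A minimal sketch: gate installs on an internal allowlist so a
# hallucinated package name fails loudly instead of resolving to
# whatever an attacker has registered under it.
APPROVED_PACKAGES = {"requests", "flask", "sqlalchemy"}  # illustrative list

def unapproved_requirements(path: str = "requirements.txt") -> list[str]:
    """Return declared packages that aren't on the allowlist."""
    flagged = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Keep the bare package name; drop version specifiers/extras.
            name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip().lower()
            if name not in APPROVED_PACKAGES:
                flagged.append(name)
    return flagged

if __name__ == "__main__":
    bad = unapproved_requirements()
    if bad:
        raise SystemExit(f"Unapproved (possibly hallucinated) packages: {bad}")
```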
But there’s also good news: simple prompting changes can dramatically improve security outcomes. “I don’t know how many times I’ve asked an AI assistant to produce code, then said, ‘Okay, but can you now make it secure,’ and it just does it,” Lehr shared.
The most effective approach involves creating custom AI assistants preloaded with your organization’s security standards. “Have your entire development workforce use a profile preloaded with security best practices,” Lehr suggested. “This is one of the best ways to take control at mass scale.”
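Here is what such a profile might look like in practice: the sketch below preloads a system prompt with organizational security rules and attaches it to every request, assuming an OpenAI-style chat API. The policy text and model name are placeholders; a real profile would encode your own standards.

```python
# A sketch of Lehr's "preloaded profile" idea, assuming an OpenAI-style
# chat API (openai>=1.0). The policy text is illustrative; in practice
# it would come from your security team's actual standards.
from openai import OpenAI

SECURITY_PROFILE = """You are a coding assistant. Always:
- Use parameterized queries; never build SQL by string concatenation.
- Suggest only dependencies from the organization's approved registry.
- Never hard-code secrets; read them from a secrets manager or env vars.
- Flag any generated code that touches auth, payments, or PII for review."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assist(user_prompt: str) -> str:
    """Send every developer request with the security profile attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SECURITY_PROFILE},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```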
Security teams should also implement mandatory review processes for AI-generated code and develop training programs specifically addressing AI-related vulnerabilities.
For Organizations: Why “Dumbing Down” AI Is a Smart Strategy
AI lacks the domain knowledge and business context that experienced developers bring. “AI only has visibility to what it has access to… AI is not gonna understand the domain knowledge” of complex, multi-system environments, Lindsey explained.
This points to a counter-intuitive strategy: deliberately “dumbing down” AI tools to keep developers actively engaged. “If we don’t do that, then the current senior programmers are going to be the last of their kind,” van der Veer warned.
The most successful approach involves treating AI as augmentation rather than replacement, creating sandbox environments for experimentation, and maintaining investment in developer education and skill development.
The Bottom Line
AI-assisted development isn’t going away, but success requires treating it as a powerful tool that amplifies human expertise rather than replaces it. Organizations that balance AI productivity with security rigor and human oversight will thrive. Those that don’t risk both security breaches and a workforce that can’t maintain the systems they’ve built.
The choice isn’t between AI and traditional development – it’s between thoughtful integration and reckless adoption. Teams that strike this balance will enjoy AI’s productivity gains while avoiding the pitfalls that have already claimed high-profile casualties.