The Good, The Bad, and The Ugly
The AI genie is out of the bottle. Here’s how to ensure it supports application security.
The security implications of accelerated software development were clear before artificial intelligence (AI) arrived to put the pedal to the floor. With 40-80% of code in new software projects already coming from third parties, “vibe coding” and AI-assisted development add a new layer of complexity to securing applications. For AppSec teams, this can mean more code to assess and more dependencies to monitor, all within the same accelerated software development lifecycle (SDLC).
Third-party code can at least be traced. What happens when code is AI-generated, effectively originating in a black box?
For AppSec teams, the challenge of understanding software risk isn’t just about keeping pace with development; it’s about smarter decision-making, supported by automation, AI, and machine learning (ML). To get there, security and development teams need a realistic understanding of the benefits and risks of AI-generated software.
Augmentation vs. Automation
As we’ve discussed elsewhere, the sky isn’t always blue in the world of AI-generated code and tools. But with the genie already out of the bottle, it’s time to take a more strategic approach to software development that incorporates AI. AI-generated code can create security risk, but AI tools are also increasingly part of the solution, bringing additional speed, insight, and context to human capabilities and, perhaps most importantly, reliably handling the mundane, time-consuming tasks so that humans can focus on remediating the 5% of issues that really matter.
To augment human expertise, AI has to be built on human expertise. That’s where the real innovation happens. As Miles Davis said, “First, you have to learn the rules of the game. Then you have to play better than anyone else.”
Here are some ways that smart use of AI is helping AppSec teams to keep the security guardrails up without impeding productivity and innovation.
Security at the Speed of Development
Accelerated SDLCs introduce cyber risk, but AI and automation tools are helping AppSec teams evolve to meet developers where they are, without compromising security. Get the balance between development speed and security right, and security teams can make a meaningful shift from reactive scanning to proactive risk reduction and vulnerability prevention.
Where traditional vulnerability assessment approaches struggle to keep pace with modern software development, automation and AI tools can accelerate and enhance security decision-making, helping shift focus from detecting threats to preventing them.
Some of the key areas in which AI and automation are transforming AppSec are worth a closer look:
Automated code review: Integrating and automating tools like Static Application Security Testing (SAST) and Software Composition Analysis (SCA) in CI/CD pipelines lets teams identify and remediate security issues early in development. AI takes this a step further, improving the accuracy and relevance of suggested code changes and real-time feedback.
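To make that concrete, here’s a minimal sketch of what a pipeline gate might look like, assuming your scanners write JSON reports to a shared directory. The paths, report schema, and severity threshold are illustrative assumptions, not any specific tool’s output:

```python
#!/usr/bin/env python3
"""Minimal CI gate: fail the build when scanner reports contain
high-severity findings. Report paths and JSON schema are hypothetical;
adapt them to whatever your SAST/SCA tools actually emit."""
import json
import sys
from pathlib import Path

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = SEVERITY_RANK["high"]  # block merges on high or critical

def load_findings(report: Path) -> list[dict]:
    # Assumes each report is a JSON list of {"id", "severity", "file"} objects.
    return json.loads(report.read_text())

def main() -> int:
    blocking = []
    for report in Path("scan-reports").glob("*.json"):  # hypothetical output dir
        for finding in load_findings(report):
            if SEVERITY_RANK.get(finding["severity"].lower(), 0) >= FAIL_AT:
                blocking.append(finding)
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']}) in {f['file']}")
    return 1 if blocking else 0  # nonzero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```

Wired into a pipeline step, a gate like this turns scan output into an enforceable policy rather than a report someone reads later.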
Threat pattern recognition: AI and ML models come into their own when analyzing massive volumes of security data from multiple sources. From threat feeds to exploit databases and dark web chatter, AI and ML can uncover patterns that traditional, siloed tools miss. Whether you’re watching for polymorphic attacks or behavioral anomalies, or mapping exposures to known adversary techniques such as those catalogued in MITRE ATT&CK, AI and ML give AppSec teams more targeted insights.
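As a small illustration of the anomaly-detection side, here’s a sketch using scikit-learn’s IsolationForest, assuming security telemetry has already been reduced to numeric feature vectors per session. The features and numbers are invented for the example:

```python
# A minimal sketch of behavioral anomaly detection. IsolationForest flags
# outliers without needing labeled attack data; the feature set here
# (requests/min, distinct endpoints, error rate) is a hypothetical example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
baseline = rng.normal(loc=[60, 12, 0.02], scale=[10, 3, 0.01], size=(500, 3))
suspicious = np.array([[600, 80, 0.4]])  # burst of errors across many endpoints
sessions = np.vstack([baseline, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
labels = model.predict(sessions)  # -1 = anomaly, 1 = normal
print("flagged sessions:", np.where(labels == -1)[0])
```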
Risk-based prioritization: The same capacity for data analysis that drives pattern recognition also allows AppSec teams to identify the software vulnerabilities that are actually exploitable in their specific environment and prioritize remediation accordingly. Rather than relying exclusively on CVSS (Common Vulnerability Scoring System) scores to judge whether a vulnerability poses a real risk to a specific organization under specific conditions, AI can infer relationships between systems, assets, and exposures that would be challenging to map manually. This cross-referencing takes prioritization beyond static scoring, with AI enabling dynamic risk calculation based on real-time and emerging trends.
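A toy example of the idea: start from the CVSS base score, then weight it by signals the base score can’t see. The weights, fields, and CVE IDs below are illustrative assumptions, not a standard formula:

```python
# A minimal sketch of context-aware prioritization. Weights are invented
# for illustration; real systems would learn or tune these per environment.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str                 # hypothetical CVE IDs below
    cvss: float              # CVSS base score, 0-10
    exploit_available: bool  # e.g., listed in CISA KEV or has a public PoC
    reachable: bool          # vulnerable code path actually invoked
    internet_facing: bool    # asset exposed beyond the internal network

def contextual_risk(v: Vuln) -> float:
    score = v.cvss
    score *= 1.5 if v.exploit_available else 1.0
    score *= 1.0 if v.reachable else 0.3   # unreachable code drops sharply
    score *= 1.3 if v.internet_facing else 0.8
    return round(min(score, 10.0), 1)

findings = [
    Vuln("CVE-2025-0001", 9.8, exploit_available=False, reachable=False, internet_facing=False),
    Vuln("CVE-2025-0002", 6.5, exploit_available=True, reachable=True, internet_facing=True),
]
for v in sorted(findings, key=contextual_risk, reverse=True):
    print(v.cve, "base:", v.cvss, "-> contextual:", contextual_risk(v))
```

Note the outcome: the exploitable, internet-facing “medium” outranks the unreachable “critical,” which is exactly the reordering static scoring can’t deliver.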
Reducing alert fatigue and noise: Automation can significantly reduce the burden of false positives by automatically filtering out known, accepted, or irrelevant issues. Automated normalization, deduplication, and correlation of data across tools such as SAST, SCA, Infrastructure as Code (IaC) scanning, and more further reduces the noise. AI rounds it all out, taking on triage and enabling teams to focus on the most relevant threats.
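Here’s a minimal sketch of that normalization-and-deduplication step, with hypothetical field names standing in for real tool output:

```python
# Map each tool's raw output onto a common schema, then fingerprint so the
# same flaw reported twice collapses into one finding. All field names and
# the sample data (including the CVE ID) are hypothetical.
import hashlib

def normalize(tool: str, raw: dict) -> dict:
    # Each tool names things differently; reduce everything to one schema.
    if tool == "sast":
        return {"rule": raw["check_id"], "file": raw["path"], "line": raw["start_line"]}
    if tool == "sca":
        return {"rule": raw["vuln_id"], "file": raw["manifest"], "line": 0}
    raise ValueError(f"unknown tool: {tool}")

def fingerprint(f: dict) -> str:
    key = f"{f['rule']}|{f['file']}|{f['line']}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

raw_results = [
    ("sast", {"check_id": "sql-injection", "path": "app/db.py", "start_line": 42}),
    ("sast", {"check_id": "sql-injection", "path": "app/db.py", "start_line": 42}),  # repeat scan
    ("sca", {"vuln_id": "CVE-2024-12345", "manifest": "requirements.txt"}),
]
unique = {fingerprint(n): n for n in (normalize(t, r) for t, r in raw_results)}
print(f"{len(raw_results)} raw findings -> {len(unique)} after dedup")
```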
The benefits of AI/ML and automation are clear, but as discussed at the beginning of this post, there are caveats.
Keeping it Real
The old adage of “garbage in, garbage out” holds as true for AI as it does for any data-driven technology. AI, ML, and automation are only as good as the data they’re trained on, and not all data sources are created equal. That last point is something we’ll explore in more detail in a future post, but when it comes to AppSec, data quality is absolutely foundational. This is not a time for blind trust: the tasks you hand over to AI might often be quotidian, but the data you base decisions on can make or break your AI strategy.
AI systems pulling from questionable or opaque data sources can make confident decisions based on flawed assumptions, which is potentially disastrous for application security. Bad data can amplify security risks with profound impacts, as researchers at New York University found when studying large language models (LLMs) trained on oncology data: poisoning or corrupting just 0.001% of the training data was enough to produce cancer misdiagnoses in roughly 1 in 1,000 cases.
This butterfly effect can also be seen in the impact of low-quality data on AI systems for application security:
- Inaccurate security assessments: Train your models on unbalanced datasets (e.g., an overrepresentation of injection flaws and little representation of business logic flaws) and prepare for AI-powered blind spots and badly skewed security assessments (see the sketch after this list).
- Limited breadth of insight: Public vulnerability databases (e.g., NVD, CVE, CISA KEV, VulnCheck, GitHub Security Advisories) are useful, but overreliance on them means your AI can only work from known, reported issues, not necessarily the most relevant or emerging ones, including zero-day or business-specific flaws that have yet to surface.
- Flawed or poisoned data: Whether deliberately “poisoned” or simply inaccurate, irrelevant, or otherwise manipulated, bad data in your AI model teaches your system all the wrong lessons, including trusting malicious code or ignoring critical threats and patterns.
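To illustrate the first point, here’s a minimal pre-training sanity check, assuming a labeled corpus of code samples tagged by vulnerability class. The class names, counts, and threshold are invented for illustration:

```python
# Flag any vulnerability class so underrepresented in the training corpus
# that the model is unlikely to learn it. All numbers are hypothetical.
from collections import Counter

MIN_SHARE = 0.05  # illustrative threshold: <5% of the corpus is a blind-spot risk

labels = (
    ["sql_injection"] * 4200
    + ["xss"] * 3100
    + ["business_logic"] * 90   # barely represented
    + ["auth_bypass"] * 610
)

counts = Counter(labels)
total = sum(counts.values())
for cls, n in counts.most_common():
    share = n / total
    flag = "  <-- potential blind spot" if share < MIN_SHARE else ""
    print(f"{cls:16s} {n:5d} ({share:.1%}){flag}")
```

A check this simple won’t fix a skewed corpus, but it at least makes the skew visible before the model bakes it in.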
As you might expect, automation comes with similar caveats and potential pitfalls.
Co-pilot, Not Auto-pilot
As with AI, automation can transform application security, reducing the scope for human error and giving human experts the space and time to focus on the 5% of threats that would actually impact your organization if successfully exploited. But without proper planning, automation can create as many problems as it solves:
Toxic positivity: Automation that isn’t tuned to your environment’s unique risks and business needs overwhelms teams with irrelevant or false-positive alerts. While AppSec teams are drowning in a tsunami of alerts, there’s a real danger of missing genuine risks.
False security: Pulling data from low-quality or limited datasets can fuel a false sense of security; systems trained on narrow data miss novel or emerging threats simply because they haven’t seen them before.
Context blindness: Many automated remediation tools lack the contextual awareness needed to keep irrelevant findings from being escalated. A vulnerability might be scored “critical” in the NVD yet have little or no impact on your specific systems; contextual awareness and evidence-based data keep such issues from diverting resources away from the risks that genuinely threaten your environment.
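One way to picture the fix is an evidence-based escalation filter that joins NVD severity with facts about your own deployment. Everything here (the service inventory, field names, and CVE IDs) is hypothetical:

```python
# Suppress paper-critical findings with no local exposure; escalate the rest.
DEPLOYED_SERVICES = {"payments-api", "web-frontend"}  # hypothetical inventory

def should_escalate(finding: dict) -> bool:
    if finding["service"] not in DEPLOYED_SERVICES:
        return False  # vulnerable component isn't even running here
    if not finding["reachable"]:
        return False  # code path never invoked in this deployment
    return finding["nvd_severity"] in {"HIGH", "CRITICAL"}

findings = [
    {"id": "CVE-2025-1111", "nvd_severity": "CRITICAL", "service": "legacy-batch", "reachable": True},
    {"id": "CVE-2025-2222", "nvd_severity": "HIGH", "service": "payments-api", "reachable": True},
]
for f in findings:
    print(f["id"], "-> escalate" if should_escalate(f) else "-> suppress (no local exposure)")
```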
Finally, it’s worth remembering how the experts on your AppSec teams achieved that status: they learned by doing. Automation is an excellent way to take the strain out of mundane tasks such as patching, but it shouldn’t come at the cost of de-skilling your team. After all, who is going to detect and mitigate the curveballs your AI misses (or introduces)?
Think Strategy, Not Shortcut
As SDLCs accelerate, AI and automation can be powerful tools for application security practitioners and DevOps teams. No one wants to be the bottleneck in the development lifecycle, and, as we’ve seen, new techniques can transform our ability to bake security into software development processes from the earliest stages. However, it’s important not to lose sight of the fundamentals:
AI can be a powerful accelerator, but in AppSec there’s no magic wand. Security outcomes still depend on fundamentals: code scanning, secure SDLC integration, clear threat models, and developer education. Organizations hoping to automate their way out of technical debt or backlogs risk over-reliance and a false sense of “job done.”
We started with Miles Davis, so let’s give the last word to Brad Mehldau: “True originality is rooted in what has gone before.” Understand that AI and automation are tools for augmenting excellence, not cutting corners, and you should be able to keep things on the right track.
OX Security’s Approach to AI and Automation
OX’s unified ASPM platform places automation and AI at the heart of application security, streamlining processes and surfacing the 5% of risks that impact your organization. What sets OX apart is the quality of our data and the structure of our data fabric, giving security teams intelligence that is both relevant and truly actionable. OX pulls from a rich, unified dataset spanning the entire software lifecycle — including source code, pipelines, production telemetry, and more, as well as the full breadth of vulnerability assessment tools (including 10 OX-native scanners), which gives our AI models meaningful context and accurate results.
To learn more about how OX’s AI-driven contextual analysis actually understands software risk, cutting through alert noise to give organizations insight based on real-world exploitability, reachability, and business impact, book a demo today.