Artificial intelligence (AI) seems to be front and center in every aspect of life these days. Wherever you go, whatever you read, AI is a trending topic. Regardless of industry, skill set, or experience, AI is creeping into the commonplace and dominating the workplace. When it comes to tech (not surprisingly), AI is changing the way people work and the way entire job functions operate. If you don’t embrace it, you’ll be the next dinosaur.
Software developers, who are historically keen to embrace new technologies, have jumped headlong into the AI evolution. From coding copilots to generative content creation, AI is rapidly shaping how developers build, test, and deploy software. However, as AI accelerates development and fosters innovation, it also introduces risks that, if not handled correctly, could dramatically expand the attack surface and pose a threat to society at large. If you think that’s an extreme exaggeration, you might be at odds with the European Union’s 2024 binding regulation for artificial intelligence: the EU AI Act.
Regarded as the world’s first and most comprehensive regulatory framework for AI, the Act was written to establish safety guardrails for how AI is built and used, with a focus on personal and societal risk. Individuals who are opposed to regulation may feel that this law is overly restrictive and impedes progress. Nevertheless, with this regulation, one thing is clear: change is coming. Software developers and AppSec teams had best be ready.
Why the EU AI Act Matters for Developers
The Act is primarily designed to govern AI systems that impact people’s rights and safety (think: biometric identification, education algorithms, and recruitment tools). However, its ripple effects extend across the software development lifecycle and to a plethora of software types, including HR systems, productivity tools, critical infrastructure, and healthcare.
Any organization building, integrating, or distributing AI-powered applications in the EU will need to comply with new requirements for security, transparency, traceability, and risk management.
Importantly, the AI Act carries significant implications beyond the European continent; the legislation explicitly asserts an “extraterritorial reach,” meaning it applies to non-EU companies if their AI products’ output is sold or used within the EU or European Economic Area (EEA). This means that most companies producing AI software will be subject to this regulation, whether they’re located in the EU/EEA or just doing business with those that are.
With the Act’s next major compliance deadline arriving this summer (August 2, 2025, when obligations for general-purpose AI models, governance, and penalties take effect) and most high-risk requirements applying from August 2, 2026, DevOps and AppSec teams have a long list of requirements to meet. In the best-case scenario, organizations started preparing as soon as the Act was officially passed in 2024. In reality, and pre-dating this law, many security teams already grapple with software supply chain complexity, mounting technical debt, and the pressures of continuous delivery.
High-Risk Code in High-Risk Systems
One of the core features of the Act is its risk-based classification. Under this scheme, AI systems fall into four tiers: unacceptable risk, high risk, limited risk, and minimal risk (summarized in the table below).
| Risk Category | Key Characteristics/Examples | Primary Regulatory Approach/Obligations |
| --- | --- | --- |
| Unacceptable Risk | Social scoring, untargeted facial recognition scraping, emotion recognition in the workplace/education, manipulative AI, targeting of vulnerable populations | Banned outright; immediate cessation of use. Limited exceptions for specific law enforcement uses with judicial approval. |
| High Risk | Safety components in products (medical devices, cars, aviation), critical infrastructure, education/vocational training, employment/HR, essential public/private services, law enforcement, justice, democratic processes | Stringent oversight; comprehensive requirements including quality management, risk management, technical documentation, human oversight, data governance, conformity assessment, and post-market monitoring. |
| Limited Risk | AI systems intended to interact directly with individuals (chatbots), AI systems generating/modifying content (deepfakes) | Transparency requirements; disclosure of AI interaction, clear labeling of AI-generated content, copyright compliance for generative AI. |
| Minimal/No Risk | Email spam filters, streaming service recommendation algorithms, retail cross-sell recommendation systems | Minimal restrictions; generally unregulated, but adherence to voluntary codes of conduct is encouraged. |
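As a rough illustration only (not an official classification tool), a team triaging its own portfolio might start with a simple mapping from use case to the tier it most plausibly falls under, then route anything ambiguous to legal review. The category names below are assumptions drawn from the table above:

```python
"""Illustrative triage helper: map a system's use-case category to the AI Act
risk tier it most plausibly falls under. A rough sketch based on the table
above; the category names are assumptions, and real classification decisions
need legal review, not a lookup table."""
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical use-case categories keyed to the tiers summarized above.
CATEGORY_TO_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "medical_device_safety_component": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "critical_infrastructure_control": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "generative_content": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
    "recommendation_engine": RiskTier.MINIMAL,
}

def triage(category: str) -> RiskTier:
    """Default to HIGH when in doubt, so unknown use cases get scrutiny."""
    return CATEGORY_TO_TIER.get(category, RiskTier.HIGH)

if __name__ == "__main__":
    for use_case in ("recruitment_screening", "customer_chatbot", "unknown_tool"):
        print(f"{use_case}: {triage(use_case).value}")
```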
Systems classified as high-risk, such as those involved in critical infrastructure, access to essential public services, or user profiling, must meet strict security, logging, and governance requirements. Importantly for developers, this includes the underlying software code and how it is created and maintained.
If authentication workflows, dependency chains, and CI/CD security posture were operational and security concerns before the Act, they’re now also compliance mandates. It’s no longer just the security team (or possibly the executive team, if the organization is especially security-forward) breathing down developers’ necks; auditors and regulators are now added to the mix and could impose severe operational and financial penalties if the rules aren’t followed.
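To make “compliance mandate” concrete in engineering terms, here is a minimal sketch of a CI gate that blocks a release while critical findings remain open. It assumes a hypothetical findings.json report produced by earlier pipeline stages; the schema, thresholds, and file name are illustrative, not anything the Act or a specific tool prescribes.

```python
#!/usr/bin/env python3
"""Illustrative CI gate: fail the pipeline if critical findings remain open.

A sketch, not a mandated control. It assumes a hypothetical findings.json
report produced by whichever scanners run earlier in the pipeline; adapt
the schema and thresholds to your own tooling and policy."""
import json
import sys

MAX_ALLOWED = {"critical": 0, "high": 0}  # example policy thresholds

def main(report_path: str = "findings.json") -> int:
    with open(report_path, encoding="utf-8") as f:
        findings = json.load(f)  # expected: a list of {"id", "severity", "status"}

    # Count findings that have not been resolved, grouped by severity.
    open_counts: dict[str, int] = {}
    for finding in findings:
        if finding.get("status") != "resolved":
            sev = finding.get("severity", "unknown").lower()
            open_counts[sev] = open_counts.get(sev, 0) + 1

    violations = {
        sev: count
        for sev, count in open_counts.items()
        if count > MAX_ALLOWED.get(sev, float("inf"))
    }
    if violations:
        print(f"Release blocked: open findings exceed policy: {violations}")
        return 1  # non-zero exit fails the CI job

    print("Security gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```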
Shift Left or Fall Behind
Regulation or not, modern security practices already encourage teams to shift security left—embedding protection earlier in the software development lifecycle (SDLC). But under the AI Act, this becomes more than a best practice. It’s a requirement.
Key Obligations for Providers of High-Risk AI Systems: Design, Development, and Conformity
Understandably, providers and builders of high-risk AI systems must meet the most stringent standards.
Design: At the design stage, software providers must implement risk management systems, ensure data governance and quality, and embed robust technical safeguards that align with safety, cybersecurity, and human oversight standards. The emphasis is on building trust from the start so that risks don’t make it through the development lifecycle and into runtime, where the stakes are highest and most costly to fix.
Development: During development, high-risk AI systems must undergo transparent documentation, ongoing testing, and bias monitoring. Crucially, teams must track how the system learns, adapts, and behaves under real-world conditions. It’s no longer enough to ship a product that “works.” It must work consistently, equitably, and accountably.
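As one small example of what ongoing bias monitoring can look like in code (a sketch only; the metric, group labels, threshold, and data shape are assumptions rather than anything the Act prescribes), a team might periodically compare positive-outcome rates across groups and flag divergence for human review:

```python
"""Illustrative bias check: compare positive-outcome rates across groups.

A sketch of ongoing bias monitoring, not a compliance-grade fairness audit.
The group labels, threshold, and data shape are assumptions for illustration."""
from collections import defaultdict

PARITY_THRESHOLD = 0.10  # example tolerance for the gap in positive rates

def positive_rates(records: list[dict]) -> dict[str, float]:
    """records: [{"group": str, "prediction": 0 or 1}, ...] (assumed shape)."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        positives[record["group"]] += record["prediction"]
    return {group: positives[group] / totals[group] for group in totals}

def parity_gap(records: list[dict]) -> float:
    """Demographic parity difference: max gap between group positive rates."""
    rates = positive_rates(records)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    sample = [
        {"group": "A", "prediction": 1}, {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 0}, {"group": "B", "prediction": 1},
        {"group": "B", "prediction": 0}, {"group": "B", "prediction": 0},
    ]
    gap = parity_gap(sample)
    print(f"Parity gap: {gap:.2f}")
    if gap > PARITY_THRESHOLD:
        print("Flag for review: outcome rates diverge across groups.")
```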
Conformity: Finally, before deployment, providers are required to complete a formal conformity assessment demonstrating compliance with the Act through technical documentation, data quality assessment, risk management, human oversight, and transparency measures. Providers must test, evaluate, and (on a case-by-case basis) conduct third-party reviews. Further, they are required to report any incidents that could impact safety or fundamental rights.
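To ground the documentation requirement, here is a hedged sketch of how a team might assemble machine-readable evidence for a conformity file. The artifact names, fields, and single-JSON-bundle format are illustrative assumptions; the Act does not prescribe any particular structure.

```python
"""Illustrative evidence bundle for a conformity/technical-documentation file.

A sketch only: the Act does not prescribe this format. File names, fields,
and the idea of a single JSON bundle are assumptions for illustration."""
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_FILES = [  # hypothetical artifacts produced elsewhere in the SDLC
    "sbom.cdx.json",        # software bill of materials
    "risk_assessment.md",   # documented risk management decisions
    "test_results.json",    # functional, bias, and robustness test output
    "model_card.md",        # intended purpose, data lineage, limitations
]

def sha256(path: Path) -> str:
    """Hash each artifact so the bundle is tamper-evident and traceable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_bundle(root: str = ".") -> dict:
    root_path = Path(root)
    artifacts = []
    for name in EVIDENCE_FILES:
        path = root_path / name
        if path.exists():
            artifacts.append({"file": name, "sha256": sha256(path)})
        else:
            artifacts.append({"file": name, "status": "missing"})
    return {
        "system": "example-ai-service",          # hypothetical system name
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": artifacts,
    }

if __name__ == "__main__":
    print(json.dumps(build_bundle(), indent=2))
```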
Key Obligations for Providers of Limited-Risk AI Systems: Transparency and Disclosure
While high-risk systems receive the bulk of regulatory attention, limited-risk AI systems must meet requirements, too. These include transparency obligations, particularly for software that users interact with directly or systems that generate content. Think: chatbots and vibe-coding assistants. What’s more, under the AI Act, providers must ensure users understand they’re engaging with an AI system and, in some cases, must clearly disclose when content has been artificially generated or manipulated.
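A minimal sketch of what that disclosure can look like in practice, assuming a hypothetical generate_reply model call and illustrative field names (neither comes from the Act or any specific product):

```python
"""Illustrative transparency wrapper: label AI-generated output before it
reaches a user. A sketch under assumed names; the field names and the
generate_reply stand-in are not from the Act or any particular product."""
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DisclosedResponse:
    content: str
    ai_generated: bool = True                      # explicit machine-readable flag
    disclosure: str = "This response was generated by an AI system."
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def generate_reply(prompt: str) -> str:
    """Stand-in for a real model call; hypothetical for this sketch."""
    return f"(model output for: {prompt})"

def respond(prompt: str) -> DisclosedResponse:
    # Wrap every model response so the UI layer can surface the disclosure
    # and downstream systems can detect AI-generated content.
    return DisclosedResponse(content=generate_reply(prompt))

if __name__ == "__main__":
    reply = respond("Summarize my meeting notes.")
    print(reply.disclosure)
    print(reply.content)
```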
For limited-risk AI, providers must track and document how models behave, how outputs are generated, and how users are informed. In many cases, DevOps teams already feel AppSec processes impose a significant burden on rapid software development and deployment. The alerts are too plentiful. The findings are often questionable. Developers are already hungry for a better way to build and deploy secure software. Now, the stakes are higher.
This is why Application Security Posture Management (ASPM) platforms are quickly becoming indispensable in organizations that build, test, and deploy software, regardless of the latest regulations, and certainly in light of them. ASPM platforms can help organizations meet the AI Act’s requirements by tying model logic, training data lineage, and deployment configurations to real-time monitoring, so teams can stay compliant while focusing on delivering secure software.
ASPM: From Risk Reduction to Regulatory Alignment
Many security professionals have questioned the value of compliance as a cyber risk strategy, and rightly so. But compliance is also a reality, and a growing one. While it’s common to hear security practitioners preach that “compliance doesn’t equal security,” compliance must be part of security if the organization doesn’t want to run afoul of legal obligations.
In this sense, it’s useful to look at compliance as the foundation of a good security program and the basis for a stronger, more robust AppSec program. Yes, there may be more boxes to check, but the AI Act does a nice job of laying out some security basics.
To help with that endeavor, ASPM can serve as the control plane for AI software risk and compliance. ASPM brings together many crucial capabilities and allows both DevOps and AppSec teams to cut through the noise of traditional and siloed AppSec practices.
A unified ASPM platform provides critical capabilities that translate regulatory pressure into engineering action, offering:
- Unified visibility: ASPM integrates data from various security testing tools (SAST, DAST, SCA, secret scanning) to provide a holistic view of vulnerabilities across the entire application stack—from design to runtime. This includes identifying weaknesses in the AI model, its supporting infrastructure, and the data pipelines feeding it.
- Prioritization of exploitable vulnerabilities: ASPM helps organizations move beyond sheer volume of alerts to focus on the most critical, exploitable vulnerabilities. By analyzing context (e.g., internet exposure, sensitive data access), it prioritizes fixes for issues that pose the greatest risk.
- SBOM and supply chain integrity: Up-to-date software bills of materials (SBOMs), drift detection, and third-party component tracking are integral to both vulnerability management and compliance with the EU’s AI Act. A modern ASPM offers these capabilities in one control plane (see the sketch after this list).
- Configuration management and secure baselines: ASPM helps monitor and enforce secure configurations throughout the SDLC. It can detect misconfigurations in cloud environments, CI/CD pipelines, and other infrastructure components that could expose the software to attack.
- Automated evidence collection: ASPMs can automate the collection of security-related evidence throughout the SDLC, including vulnerability scan results, policy enforcement records, and remediation progress. This can feed into the comprehensive technical documentation required by the Act.
- Audit trails: Through detailed logs of security activities and changes, ASPM contributes to the traceability and accountability requirements of the Act, allowing organizations to reconstruct events in case of an incident or for compliance audits.
- Security posture attestation: For providers of AI-powered software, the comprehensive insights and evidence provided by a leading ASPM like OX Security can be used as part of the AI Act’s conformity assessment to demonstrate that the AI system meets cybersecurity and robustness requirements.
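To ground the SBOM and drift points above, here is a hedged sketch that compares two CycloneDX-style JSON SBOMs and reports added, removed, or version-changed components. The file names are placeholders, and a real ASPM performs this kind of check continuously rather than as a one-off script.

```python
"""Illustrative SBOM drift check: compare two CycloneDX-style JSON SBOMs and
report components that were added, removed, or changed version. A sketch
only; file names are placeholders."""
import json
import sys

def load_components(path: str) -> dict[str, str]:
    """Return {component name: version} from a CycloneDX JSON document."""
    with open(path, encoding="utf-8") as f:
        doc = json.load(f)
    return {
        comp.get("name", "unknown"): comp.get("version", "unknown")
        for comp in doc.get("components", [])
    }

def diff_sboms(baseline_path: str, current_path: str) -> dict[str, list[str]]:
    baseline = load_components(baseline_path)
    current = load_components(current_path)
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "version_changed": sorted(
            name for name in set(baseline) & set(current)
            if baseline[name] != current[name]
        ),
    }

if __name__ == "__main__":
    # Usage (placeholder file names): python sbom_drift.py baseline.cdx.json current.cdx.json
    baseline_file, current_file = sys.argv[1:3]
    print(json.dumps(diff_sboms(baseline_file, current_file), indent=2))
```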
The Wrap Up
The EU AI Act is a major regulatory milestone that will impact companies far beyond the EU’s borders. It is also a hard push in the direction of greater software security rigor. As newly developed AI floods application development, from code generation to threat detection and response, organizations building and using AI systems and AI-powered software must be aware that AI in its current state is vulnerable and fallible. This means it requires identification, understanding, and oversight.
Even before AI was part of the software and vulnerability assessment equation, organizations had a responsibility to build with transparency, document with precision, and prioritize security at every stage of the development lifecycle. Yet, these processes continue to challenge teams, especially those using multiple, disparate systems to understand and manage software-related risk. The introduction of AI simply complicates the issues further.
However, ASPM platforms like OX Security allow organizations to regain control. ASPM isn’t a magic bullet for the AI Act, but it is indispensable for addressing technical requirements. Through continuous visibility, contextualized risk prioritization, and robust remediation capabilities, ASPM helps organizations build, deploy, and maintain secure and resilient AI systems and AI-powered software that can meet the mandates of the EU AI Act.