When vendor prioritization data is broken, developers and AppSec teams pay the price.
AppSec teams are drowning in vulnerabilities and starved for context. Accelerated release cycles, extensive use of third-party and open-source components, rapidly expanding attack surfaces: we’ve talked a lot on this blog about how we got here. As for how we get out of trouble, you can throw more vulnerability data at the problem, but what you really need to know is which vulnerabilities will get you breached first.
For all the advancements in tools and frameworks (and let’s be fair, there have been some great innovations), the fundamental challenge remains: Separating the most relevant threats from the noise. To make things harder, what counts as noise for one organization is a signal for another tech stack. “Critical” is relative: A score of “10” for a vulnerability in an isolated system isn’t the five-alarm fire that a lower-scoring yet actively exploited vulnerability in a widely used application could be.
Case in point: Vulnerabilities like memory buffer errors and improper input validation are under-represented in MITRE’s Top 25, but they dominate real-world ransomware attack patterns.
There’s a world of difference between severity and risk. And that’s where vendor data really starts to matter.
How Security Vendors Score Risk — and Why it Matters
At the baseline level, generic prioritization frameworks such as the Common Vulnerability Scoring System (CVSS), Exploit Prediction Scoring System (EPSS), and Known Exploited Vulnerabilities (KEV) catalog are just that: Baseline. They lack context, measure severity rather than risk, offer little differentiation between high scores, are static and slow to update, lack chaining awareness, and are open to misinterpretation and gaming.
Where does that get you? Teams already under strain are tasked with addressing alerts that may not be relevant or critical. Misalignment leads to frustration, wasted productivity and, in true “cry wolf” fashion, alert fatigue. That last issue can have significant consequences, as the Target data breach (which wiped 46% off profits and caused an estimated $200m in card replacement costs) underlined.
All this before we consider the fact that different vendors often assign varying priority levels to the same vulnerability, leading to confusion and inconsistent remediation efforts.
To solve this, many AppSec teams enrich their programs with vendor-supplied prioritization data that (theoretically) reflects real-world needs: Adversarial intent, chaining analysis, asset context, and actual business impact. It sounds great, but how do you know what “good” looks like?
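To make the difference concrete, here’s a minimal Python sketch (with invented CVE IDs and scores, purely for illustration) of how a severity-only ranking and a risk-informed ranking can disagree on the exact same findings:

```python
# Hypothetical vulnerability records; IDs and scores are illustrative only.
vulns = [
    {"id": "CVE-A", "cvss": 10.0, "epss": 0.01, "in_kev": False},
    {"id": "CVE-B", "cvss": 7.5,  "epss": 0.92, "in_kev": True},
]

# Severity-only ranking: sort by CVSS score alone.
by_severity = sorted(vulns, key=lambda v: -v["cvss"])

# Risk-informed ranking: known exploitation (KEV) and exploit likelihood
# (EPSS) outrank a high static severity score.
by_risk = sorted(vulns, key=lambda v: (-v["in_kev"], -v["epss"], -v["cvss"]))

print([v["id"] for v in by_severity])  # ['CVE-A', 'CVE-B']
print([v["id"] for v in by_risk])      # ['CVE-B', 'CVE-A']
```

The severity-only queue puts the CVSS 10 at the top; the risk-informed queue puts the actively exploited CVSS 7.5 first, which is the behavior you want from enriched vendor data.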
Vulnerability Risk Scoring for the Real World: What Does “Good” Look Like?
While legacy scoring models homed in on strategies like counting vulnerabilities, prioritizing based on generic severity labels (like CVSS), and behaving as if every environment were pretty much the same, modern approaches are far more risk-based. After all, risk management and reduction are what the board, the CISO, and developer teams all want.
So what does “good” risk scoring look like? Your data should include:
Reachability: Can this vulnerability be triggered in your application? Reachability is a prerequisite for exploitability. Many vulnerabilities are nearly impossible for attackers to reach – this difference between theory and threat is important for prioritization and noise reduction. It’s also important from a reporting perspective: When it comes to translating security into terms the business can understand, frame things as “We had 700 vulnerabilities, but only five were reachable in our environment, and we fixed all five.”
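As a simplified sketch of the idea (the call graph, function names, and vulnerability records below are all hypothetical), reachability analysis asks whether any vulnerable function sits on a call path from your application’s entry points:

```python
# Toy call graph: caller -> set of callees. All names are hypothetical.
call_graph = {
    "main": {"parse_input", "render"},
    "parse_input": {"lib.unzip"},
    "render": set(),
}
entry_points = {"main"}

def reachable_functions(graph, entries):
    """Collect every function reachable from the entry points (simple DFS)."""
    seen, stack = set(), list(entries)
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(graph.get(fn, ()))
    return seen

# Vulnerabilities tagged with the function they live in.
vulns = [
    {"id": "VULN-1", "function": "lib.unzip"},   # reachable via parse_input
    {"id": "VULN-2", "function": "lib.legacy"},  # shipped in a dependency, never called
]
live = reachable_functions(call_graph, entry_points)
actionable = [v for v in vulns if v["function"] in live]
print([v["id"] for v in actionable])  # ['VULN-1']
```

Real reachability engines are far more sophisticated (dynamic dispatch, reflection, and runtime signals all complicate the graph), but the prioritization effect is the same: findings that never intersect a live code path drop out of the queue.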
Exploitability in your stack: Not all reachable vulnerabilities are exploitable. Where reachability data tells you “This code is used in our app,” exploitability data lets teams know that an attacker could control inputs and access code, and that mitigations to prevent an exploit do not yet exist. Exploitability data helps security teams to further refine vulnerability remediation priorities.
Adversarial context: This is where cyber risk prioritization gets real. The vulnerability is reachable, it’s exploitable, but is it in the attacker playbook? Data that provides adversarial context helps AppSec teams to establish whether or not an attack path aligns with their specific environment and deployed cybersecurity controls. Understanding these conditions enables AppSec teams to prioritize by:
- Mapping issues to known techniques or exploit chains
- Understanding how attackers chain vulnerabilities
- Establishing whether the exploits are being discussed on dark web forums
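A feed with adversarial context might, in spirit, look like the hypothetical enrichment below. The technique ID follows MITRE ATT&CK’s naming, but the CVE identifiers and mappings are invented for illustration:

```python
# Hypothetical enrichment: map findings to attacker techniques,
# chaining behavior, and dark web chatter. Mappings are invented.
adversarial_context = {
    "CVE-X": {
        "attack_techniques": ["T1190"],  # Exploit Public-Facing Application
        "chains_with": ["CVE-Y"],        # observed chained for follow-on access
        "dark_web_chatter": True,
    },
}

def attacker_interest(cve_id, context):
    """Rough 0-3 signal: is this CVE in the attacker playbook at all?"""
    ctx = context.get(cve_id)
    if ctx is None:
        return 0
    return sum([bool(ctx["attack_techniques"]),
                bool(ctx["chains_with"]),
                bool(ctx["dark_web_chatter"])])

print(attacker_interest("CVE-X", adversarial_context))  # 3: strong signal
print(attacker_interest("CVE-Z", adversarial_context))  # 0: no known interest
```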
Business impact: “If this vulnerability were exploited, how would that impact the business?” For example, a low-scoring vulnerability in a customer-facing payments application could carry far greater business impact than a high-scoring one on a single server that is air gapped. Good risk scoring considers the business criticality of an asset, along with its impact on data regulated under frameworks such as HIPAA or GDPR, and takes into account the importance of user trust and brand image.
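Pulling these signals together, a composite score in this spirit might gate on reachability and then weight technical severity by business context. The fields and weights below are illustrative assumptions, not any vendor’s actual formula:

```python
def risk_score(vuln, asset):
    """Illustrative composite: technical severity gated by reachability,
    then weighted by exploitation evidence and business criticality."""
    if not vuln["reachable"]:
        return 0.0  # unreachable issues drop out of the priority queue
    score = vuln["cvss"] / 10.0
    if vuln["actively_exploited"]:
        score *= 2.0                # exploitation evidence doubles urgency
    score *= asset["criticality"]   # e.g. 3.0 for payments, 0.5 for an air-gapped box
    if asset["regulated_data"]:     # HIPAA/GDPR-regulated data raises the stakes
        score *= 1.5
    return score

payments = {"criticality": 3.0, "regulated_data": True}
lab_box  = {"criticality": 0.5, "regulated_data": False}

low_sev_payments = {"cvss": 5.0, "reachable": True, "actively_exploited": True}
high_sev_lab     = {"cvss": 9.8, "reachable": True, "actively_exploited": False}

print(risk_score(low_sev_payments, payments))  # 4.5
print(risk_score(high_sev_lab, lab_box))       # 0.49
```

Note how the low-severity finding in the payments app outscores the near-maximum CVSS on the isolated server, matching the example above.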
From a big picture perspective, “good” moves AppSec away from checkboxes and into risk reduction. Of course, all of that assumes that the sources used to feed insights are current, reliable, and accurate. Many organizations place a lot of trust in vendor data, but their knowledge of what’s going on under the hood is limited. Why does that matter?
All Data is Not Created Equal
Every security vendor has their own approach to data analysis, but as we’ve seen, it’s not just about how many vulnerabilities are detected — how, by whom, and the context in which they’re scored and ranked all have significant implications for security teams. But the source of vendor data is often overlooked. Some of the ways cybersecurity vendors gather and curate data include:
- Public National Vulnerability Database (NVD) feeds: The NVD is already under strain from a backlog, meaning data is often delayed or incomplete.
- Threat intelligence, exploit telemetry, in-house research: Many vendors complement data from the NVD with their own sources of information.
- Open-source commit histories, package manager advisories, bug bounty disclosures: Some vendors ingest additional public disclosure sources that are not included in the NVD.
- Attack path modeling, SBOM correlation, exploit validation: The most effective solutions include data that allows prioritization engines to reflect real attack behavior, as opposed to raw metadata.
Why does this all matter? If your vendor relies solely on NVD or CVE feeds, you’re not getting the full picture of your cyber risk posture — you’re missing zero-days, emerging threat patterns, and exploit chains that are actively used by attackers.
How to Evaluate Vendor Prioritization Data
All of this brings us to the ultimate question: If AppSec teams are increasingly dependent on third-party data for vulnerability management, threat intelligence, and prioritization, how can they be sure about the quality of that data?
Prioritization that isn’t based on real-world exploitability, contextual visibility, and actual attacker behavior is essentially just another endless list of alerts.
How can you evaluate vendor data to surface the risks that actually matter to your business and your security risk posture, so you can manage cyber risk effectively? Here are some things to look out for, or ask your vendor to clarify:
- Where’s it from? Is the data up-to-date, comprehensive, and credible?
Ask the vendor:
- Do you rely solely on public data (e.g., NVD, CVE)? Or do you pull from other sources, such as threat intelligence and private research?
- How do you ingest, normalize and update the data?
- Do you enrich CVEs with context from real-world exploit data (such as CISA KEV, ExploitDB)?
- Do you flag vulnerabilities not yet in the NVD?
What you want to hear: Frequent data refreshes, validated research, and the ability to flag emerging threat signals beyond the NVD.
- How is it scored? What differentiates a critical vulnerability from a low-priority one?
Ask the vendor:
- How do you determine priority beyond a CVSS score?
- How do you factor in exploitability in my stack?
- Are active exploitation signals included in scoring?
- Can I customize prioritization based on risk tolerance or business logic?
What you want to hear: Transparent scoring logic that includes reachability, exploit maturity, and attack chaining.
- What’s the context? Will the scoring reflect my organization’s real risk, not someone else’s?
Ask the vendor:
- How do you determine if a vulnerability is reachable or exploitable in my running app?
- Can you link vulnerabilities to CI/CD pipelines or containers?
- Can you prioritize based on SBOM?
- Do you support attack path analysis?
What you want to hear: CI/CD pipeline integration, runtime context, repo ownership mapping, and developer workflow support.
- Who owns the fix? Will this actually help my team to take action or will it just create more noise?
Ask the vendor:
- Can you identify which team or repo is responsible for fixing this vulnerability?
- Can you integrate with ticketing systems or issue trackers such as Jira and GitHub?
- Do you provide fix suggestions or validated remediation steps?
What you want to hear: Integration with software development tools, the ability to trace ownership, and fix guidance delivered in context – not just problem statements.
These are just overview questions, but the point stands: Not all risk prioritization data is created equal, and not all vendors are equally equipped to provide the kind of context-aware, attack-informed insights today’s AppSec teams need. Generic risk scoring isn’t just inefficient, it’s risky — that’s why it’s so important to develop an understanding of the source, structure, and strategic value of the data you use. It’s not a nice-to-have; it’s a foundation stone of an AppSec program that actually drives risk reduction, as opposed to simply documenting it.
When You Can’t Fix Every Vulnerability, Get Your Priorities Right
OX Security’s Unified AppSec platform helps AppSec and DevOps teams fix the 5% of vulnerabilities that matter most, based on data that actually understands and aligns with your organization’s risk.
OX’s integrated approach includes attack path analysis with reachability assessment and business impact evaluation, enabling organizations to prioritize vulnerabilities that pose genuine risk. Trusted, enriched data sources give visibility far beyond public feeds, while mapping issues to SBOMs and commit history provides the depth needed for earlier, more accurate, more comprehensive risk detection.
Move from alert fatigue to action when you focus on the most critical vulnerabilities for your organization.
Book a Demo today.