In application security (AppSec), visibility is often heralded as the primary pillar of a robust cyber strategy. The prevailing belief is that, by illuminating every inch of the application development process, security teams can systematically mitigate potential threats. However, this emphasis on visibility, while well-intentioned, may be overshadowing more pressing problems—ones with further-reaching business consequences than the simple answer to, “What’s in our ecosystem?”
The Illusion of Comprehensive Visibility
Imagine standing atop a mountain with a panoramic view that stretches as far as the eye can see. The vastness is awe-inspiring, but without a map or compass, the details blur into obscurity. Similarly, in AppSec, an unfiltered view of all operations can lead to information overload. Security tools inundate teams with a deluge of vulnerabilities and alerts, many of which are false positives or trivial concerns. This “noise” obscures the signals that truly matter, diverting attention and resources away from genuine threats.
The Mirage of Security Control
Organizations that build and deploy software use numerous tools to guide security and development teams throughout the software development process. Generally speaking, teams’ tech stacks include visibility tools that can create a comforting illusion of control. However, the ability to see a problem is not synonymous with solving it. And the reality for most AppSec, DevOps, and DevSecOps teams is that visibility is only one issue—perhaps just the first.
The goal of application security isn’t to see everything; visibility is merely the conduit by which teams understand and manage all components in the software development lifecycle (SDLC) meaningfully and effectively. In other words, without actionable insights and clear remediation paths, security and development teams remain vulnerable, their defenses as porous as ever.
The Siloed Landscape of AppSec Tools
Most organizations’ security arsenals are a patchwork of specialized tools—SAST, SCA, IaC, secrets scanning, and more. Each tool offers a piece of the puzzle, but without integration, aggregation, normalization, and correlation, the full picture remains elusive. This is true from both a “full lifecycle” point of view as well as an evidential point of view.
Vulnerability findings and uncontextualized alerts offer little in the way of actionable insights or recommendations about actual risks to the specific organization. After all, no two organizations’ infrastructures, processes, or priorities are the same, which means the mechanisms by which risk is assessed cannot be generic. Yet many commercial tools apply a generic formula (often based solely on CVSS or CISA KEV), and security teams are left with the manual burden of determining what is and is not relevant to their build-test-deploy cycles and to the broader organization.
Security Alert Overload
Many AppSec (and security, in general) tools were purposefully designed to unearth as many security issues as possible. The thinking: more data equals more value for the customer. If, for example, a vulnerability scan produced only three findings, users would likely be skeptical and assume the product wasn’t working correctly. “Surely there are more flaws than this!” Regardless of the severity of the findings (which is the part that actually matters), seeing only three results would be deemed inadequate.
Ironically enough, it’s the perceived thoroughness and desire for “comprehensive visibility” that frequently prevent security teams from homing in on problems that truly pose risk to the organization. When it comes to AppSec and software supply chain security (SSCS), the attack surface is so vast and so dynamic that practitioners are almost disappointed when they don’t see a plethora of results from their vulnerability assessment tools.
Yet, when bombarded with incessant, non-contextual alerts, even the most diligent security professionals can succumb to alert fatigue.
What, then, is the answer to comprehensive visibility? Should application security testing tool providers aim to eliminate some of the identified vulnerabilities? Should providers pare down the results to only the assumed-critical vulnerabilities? Should vulnerabilities listed in the OWASP Top Ten or the CISA KEV automatically be prioritized?
No. Definitively, we cannot throw the baby out with the bathwater.
This conundrum is exactly why AppSec testing tools need to do better—why contextual analysis of vulnerabilities, accompanied by evidence, is of utmost importance. AppSec and DevOps teams need the ability to see everything, but filter for and drill into what’s most important to their software development program and business needs.
Enter: Evidence-based prioritization.
From Visibility to Prioritization
Discerning and addressing vulnerabilities hinges not merely on their identification but on a nuanced understanding of their reachability, exploitability, and the unique business impact they may have. Prioritization tailored to an organization’s specific context is paramount for fortifying defenses and optimizing resource allocation.
Reachability: Mapping the Pathways of Potential Threats
Evaluating a vulnerability’s reachability requires a determination of the likelihood that an attacker can successfully access the vulnerable component. Factors such as network configurations, access controls, and system architectures play pivotal roles in this estimation. For instance, a vulnerability in an internal application protected by robust firewalls presents a lower risk compared to one exposed to public networks.
An effective AppSec platform should be able to assess these factors and help organizations focus their remediation efforts on vulnerabilities that are more accessible and, consequently, more susceptible to exploitation.
Exploitability: Gauging the Ease of Attack
Exploitability is another element with a major impact on cyber risk. When it comes to vulnerability exploitability, security tools must be able to analyze the complexity and prerequisites of an attack—not just the presence of a vulnerability.
The Common Vulnerability Scoring System (CVSS) provides one framework to assess factors such as attack vector, complexity, and required privileges. For example, a vulnerability that necessitates high-level privileges and intricate conditions poses a lower immediate threat than one that is remotely exploitable without authentication.
However, while CVSS helps vendors and users assess the parameters for exploitability, it does not always account for real-world scenarios. As such, many AppSec platforms rely on the Cybersecurity and Infrastructure Security Agency (CISA) Known Exploited Vulnerabilities (KEV) Catalog to understand which vulnerabilities are known to have been exploited.
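As an illustration, combining a KEV lookup with a CVSS threshold might look like the following sketch. The sample CVE records and the 7.0 cutoff are purely illustrative assumptions; in practice, the exploited-vulnerability data would come from CISA’s published KEV catalog rather than a hard-coded set.

```python
# Illustrative sketch: tier a finding by exploitation evidence first,
# then fall back to CVSS severity. The sample KEV set and the 7.0
# threshold are hypothetical assumptions, not a standard.

SAMPLE_KEV_IDS = {"CVE-2021-44228", "CVE-2017-5638"}  # stand-in for CISA's KEV feed

def exploitability_tier(cve_id: str, cvss_score: float) -> str:
    """Return a coarse exploitability tier for a single finding."""
    if cve_id in SAMPLE_KEV_IDS:
        return "known-exploited"   # evidence of real-world exploitation
    if cvss_score >= 7.0:
        return "high-cvss"         # severe on paper, no exploitation evidence yet
    return "lower-priority"

findings = [
    ("CVE-2021-44228", 10.0),  # Log4Shell: appears in the sample KEV set
    ("CVE-2099-0001", 8.1),    # fictitious high-CVSS finding
    ("CVE-2099-0002", 4.3),    # fictitious low-CVSS finding
]

for cve, score in findings:
    print(cve, exploitability_tier(cve, score))
```

The ordering matters: a finding with exploitation evidence outranks a higher CVSS score without it, which is exactly the gap between paper severity and real-world risk described above.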
Combining these two assessment mechanisms is a good start, but there’s one more extremely important factor…
Business Impact: Aligning AppSec with Organizational Objectives
Beyond technical assessments, organizations must understand the (unique) potential business impacts of vulnerabilities in their environments. AppSec teams must be able to gauge how an exploited vulnerability could affect operations, revenue streams, regulatory compliance, and reputation. For instance, an exploited vulnerability in a system that holds sensitive customer data (e.g., PII, financial data) may result in legal and reputational repercussions. In contrast, a weakness in a non-essential system (e.g., a marketing database that contains campaign information but not PII) poses minimal risk.
This context allows AppSec and developer teams to deprioritize patching the lesser vulnerability in favor of triaging higher-risk issues.
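One way to encode this kind of context is a weighted priority score that folds reachability, exploitation evidence, and an organization-assigned business-impact weight into a single ranking. The factors and weights below are purely illustrative assumptions, not a standard formula:

```python
# Illustrative sketch of evidence-based prioritization: a weighted score
# combining reachability, exploitability, and business impact.
# All factor values and weights here are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    reachable: bool          # can an attacker actually reach the component?
    known_exploited: bool    # e.g., listed in the CISA KEV catalog
    cvss: float              # base score, 0-10
    business_impact: float   # org-assigned weight: 0 (non-essential) to 1 (critical)

def priority_score(f: Finding) -> float:
    """Higher score = remediate sooner. Weights are illustrative."""
    exploit = 1.0 if f.known_exploited else f.cvss / 10.0
    reach = 1.0 if f.reachable else 0.2   # unreachable code still carries residual risk
    return round(exploit * reach * f.business_impact * 100, 1)

findings = [
    Finding("CVE-A", reachable=True,  known_exploited=True,  cvss=9.8, business_impact=1.0),
    Finding("CVE-B", reachable=False, known_exploited=False, cvss=9.8, business_impact=0.3),
]

for f in sorted(findings, key=priority_score, reverse=True):
    print(f.cve_id, priority_score(f))
```

Note how two findings with identical CVSS scores land far apart once reachability and business impact are applied: the exposed, customer-data-adjacent vulnerability rises to the top, while the unreachable one in a non-essential system drops to the bottom of the queue.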
Tailoring Prioritization to Individual Organizational Contexts
Every organization operates within a distinct environment, so it is imperative to customize vulnerability prioritization strategies accordingly. Factors such as industry-specific threats, regulatory requirements, and internal risk appetites should inform the prioritization process.
For example, data access in a healthcare setting is essential. Doctors, nurses, and technicians need quick and easy access to records and data to effectively treat patients. As such, authentication must be both user-friendly and rigorous—qualities often considered at odds in a security context. Therefore, a healthcare organization might prioritize authentication-based vulnerabilities over other types of vulnerabilities, then layer on additional controls (encryption, segmentation, etc.) to ensure health records and sensitive medical data aren’t exposed through initial-access exploitation.
This type of tailored prioritization is precisely what AppSec teams need to ensure their vulnerability response is accurate and effective. As such, vendor organizations should build tools that allow customers to focus on environment, context, and evidence. Though this capability might be predicated on holistic visibility (gathering all the data), visibility is the stepping stone, not the pinnacle. Verifiable, evidence-based prioritization is the peak of the assessment process and is the true facilitator of risk management.
Conclusion
While visibility into potential vulnerabilities is foundational, the essence of effective AppSec lies in precise prioritization—prioritization that is based on reachability, exploitability, and (custom) business impact assessments. But not just any analysis will do. AppSec platform vendors must adopt a contextualized, evidence-based approach that allows end users to see and understand their prioritized threats, promoting a proactive remediation strategy.
The goal of AppSec and software security tools, after all, is insight into the areas of greatest risk. You’ll notice that “insight” comes before “risk.” The visibility part is only the start. Technologies that go beyond basic visibility and generic prioritization are the ones that will result in the most effective cyber risk reduction. And isn’t risk reduction what organizations need most?