Full Stack or Full Stop: The Dangerous Gaps in Modern Enterprise Security

During an authorized security engagement, my team and I accessed a client's entire production environment, starting from just a forgotten server login. This "pivot machine" allowed us direct entry to backend systems. 

This and similar cases show that organizations focus on siloed defenses but overlook gaps between them. Attackers follow the path across boundaries, not the barriers themselves. 

Here are two anonymized examples illustrating why connected security is the missing link in most organizations' defenses. 

The Loyalty App 

A retail company hired our team to assess the security of its mobile loyalty application. A previous assessment, conducted 14 months earlier, had returned three medium-severity findings, all related to how the application stored data locally on the device. The client had addressed every finding. Their confidence was high, and the scope was clear: test the mobile app. 

We began, as we always do with Android applications, with the application package itself. Using standard reverse engineering tools, we decompiled the app down to readable source code, something anyone can do with tools freely available online. Inside the build configuration, hardcoded, commented out, and in plain sight, was a staging server address and a live authentication token, a digital key that grants access to the company's backend systems. A staging server is a near-identical copy of production, used as a dress rehearsal before release, and more often than not it contains an exact replica of production data. Developers frequently leave commented-out or hardcoded artifacts behind during development and forget to remove them before release. That was the case here. 
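Hunting for artifacts like this in decompiled source is easy to automate. The Python sketch below illustrates the idea with hypothetical patterns and a made-up sample line; the real token and server address are, of course, not reproduced here.

```python
import re

# Patterns that often flag hardcoded secrets in decompiled source.
# Both the patterns and the sample line below are hypothetical illustrations.
SECRET_PATTERNS = [
    re.compile(r"https?://staging[\w.-]*"),                          # staging server URLs
    re.compile(r'(?i)(?:api[_-]?key|token)\s*[=:]\s*"[\w-]{16,}"'),  # long literal keys
]

def find_secrets(source: str) -> list[str]:
    """Return every string in `source` that matches a secret pattern."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(source))
    return hits

# A commented-out build-config line like the one described above (invented values).
snippet = '// baseUrl = "https://staging.example-retail.com"; authToken = "abcd1234abcd1234abcd"'
print(find_secrets(snippet))
```

Real engagements use more sophisticated entropy-based scanners, but simple pattern matching already catches a surprising share of leaked credentials.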

From there, we turned to the programming interface through which the app communicates with the company's servers. The interface assigned customers sequential integer identifiers: customer number 1000 was followed by customer number 1001. The server never verified that the requesting user was entitled to see a given record. We could increment the number in each request and retrieve any customer record in the system. This type of flaw, known as an Insecure Direct Object Reference, sits at the top of the API vulnerability list maintained by the Open Web Application Security Project (OWASP), the industry body that tracks these issues globally. It requires no sophisticated technique. Only arithmetic. In this case, it took minutes to write a program that used the staging server address and credentials we had found, walked through the full list of customers, and exported every record into a Microsoft Excel spreadsheet. This is exactly what adversaries do when they breach a company and sell its data on the dark web. 
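The enumeration itself is trivial to express in code. This Python sketch simulates the flaw with an in-memory "backend" standing in for the real API; the URL template and record layout are hypothetical.

```python
# A minimal sketch of how sequential identifiers turn into a full data dump.
# The API_BASE template and record fields are hypothetical.
API_BASE = "https://staging.example-retail.com/api/customers/{id}"

def dump_customers(fetch, start: int, end: int) -> list[dict]:
    """Walk sequential customer IDs and keep every record the server returns.

    `fetch` stands in for an authenticated HTTP GET; a vulnerable server
    returns the record for ANY id instead of checking ownership.
    """
    records = []
    for customer_id in range(start, end + 1):
        record = fetch(API_BASE.format(id=customer_id))
        if record is not None:
            records.append(record)
    return records

# Simulated vulnerable backend: a plain lookup by id, no ownership check.
DB = {1000: {"id": 1000, "name": "Alice"}, 1001: {"id": 1001, "name": "Bob"}}
fake_fetch = lambda url: DB.get(int(url.rsplit("/", 1)[-1]))

print(len(dump_customers(fake_fetch, 1000, 1005)))  # every record in the range
```

The fix is equally simple in principle: the server must check that the authenticated user owns the requested record before returning it.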

The access tokens used by the application presented the next step. Each token contained a field declaring whether the bearer was a customer or a member of staff, but that field was verified inside the app on the user's device rather than on the company's server. By modifying the field value and re-submitting the token, we elevated our access from a standard customer account to a staff-level account. The server never challenged the request. 

Staff-level access exposed a privileged administrative endpoint that returned temporary credentials for Amazon Web Services, the cloud platform the company used to store customer data. Those credentials carried unrestricted permissions across the organization's cloud storage and database services. 

The chain was complete. A mobile application had become a key to the company's entire cloud infrastructure and, through it, to more than 400,000 customer records. 

Here is what matters more than the technical detail. If a separate mobile security team had assessed the app, they would likely have found the hardcoded credential. If a backend security team had reviewed the server, they might have found the authorization flaw. If a cloud security team had audited access permissions, they might have flagged the over-permissioned account. Each team would have found its part. Nobody would have walked the path from one finding to the next. That walk is the attack. The individual findings were of medium severity. The chain was catastrophic. 

Four domains. One path. The engagement had been scoped for the mobile app only. 

The Forgotten Server 

The second case began with an even less promising starting point and went further. 

The client asked us to conduct an assumed breach exercise, designed to answer a specific question: if an attacker had already gained limited access to an internal system, how far could they go? We were given credentials for a standard user account on a server running Ubuntu, a widely used Linux-based operating system, located in the demilitarized zone (DMZ), the buffer network that sits between the public internet and a company's internal systems. The server's stated purpose was administrative. It held no sensitive data, had no elevated permissions, and nothing about it would have flagged it as a priority in any threat assessment. 

We began by cataloging the software running on the machine. A standard system utility called pkexec, installed on virtually every Linux-based operating system, had not been updated. The vulnerability, tracked under the public identifier CVE-2021-4034, part of the industry's standard catalog of known security flaws, and nicknamed PwnKit by the researchers at Qualys who discovered it, was disclosed in January 2022. Those same researchers later determined that the flaw had existed in the software for more than 12 years before anyone identified it. On this server, the fix had never been applied; the last maintenance cycle had run 18 months earlier. 
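Catching this class of oversight requires no exploit code, only an inventory check. A minimal sketch, with a hypothetical package inventory, that flags packages last updated before a known CVE affecting them was publicly disclosed:

```python
from datetime import date

# Illustrative mapping of package name to a known CVE and its disclosure date.
# In practice this comes from a vulnerability feed, not a hand-written dict.
KNOWN_CVES = {
    "policykit-1": ("CVE-2021-4034", date(2022, 1, 25)),  # PwnKit disclosure
}

def stale_packages(installed: dict[str, date]) -> list[tuple[str, str]]:
    """Return (package, cve_id) pairs where the package was last updated
    before the CVE affecting it was publicly disclosed."""
    findings = []
    for pkg, last_update in installed.items():
        if pkg in KNOWN_CVES:
            cve_id, disclosed = KNOWN_CVES[pkg]
            if last_update < disclosed:
                findings.append((pkg, cve_id))
    return findings

# The forgotten server: last maintained long before PwnKit was disclosed.
print(stale_packages({"policykit-1": date(2021, 6, 1)}))
```

A date comparison is a crude proxy, since distributions backport fixes without version bumps, but it is enough to surface machines that have fallen out of the patch cycle entirely.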

Once we confirmed the vulnerability, exploiting PwnKit was straightforward. Within seconds, we escalated from a standard user account to the highest level of system access available on the machine, allowing us to read every file on the system. 

In user account directories, we found encrypted remote access keys and credentials for an infrastructure automation tool. The keys gave us access to additional internal systems. The permissions were limited, but while cataloging those systems, we found something that changed the scope of the engagement entirely: the server we had been testing was also functioning as an automated build agent, a machine that executes software development workflows on behalf of the engineering team and holds access to the code repositories and secret configuration values those workflows require. 

In this case, the build agent had access to private source code repositories across the entire engineering organization. The checked-out code sat in hidden directories, known in the Unix and Linux world as dot files and dot directories, which hide files from a casual listing but offer no protection against anyone who knows to look. 

Inside one of those repositories, we found a leftover file that had been used to set up LDAP users in an old system. The file contained 213 hashed password entries belonging to the company's engineering staff, each protected by cryptographic one-way hashing, a standard method of storing passwords in a form that cannot be read directly but can be tested against guesses. Weak passwords, however, are easy to crack offline with tools such as Hashcat, which test common password combinations against the hashed entries. We recovered 9 of the underlying passwords in 2 hours. One belonged to an infrastructure administrator with privileged access across production systems. 
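The offline attack works because nothing rate-limits a local dictionary test. Here is a toy Python version of what Hashcat does, assuming unsalted SHA-1 purely for illustration (the real entries used a different scheme, and Hashcat runs the same comparison billions of times per second on a GPU):

```python
import hashlib

def crack(hashes: set[str], wordlist: list[str]) -> dict[str, str]:
    """Offline dictionary attack: hash each candidate and look for a match.

    Unsalted SHA-1 is assumed here for illustration only; real password
    stores should use a salted, slow algorithm such as bcrypt or Argon2,
    which makes exactly this attack impractically expensive.
    """
    recovered = {}
    for candidate in wordlist:
        digest = hashlib.sha1(candidate.encode()).hexdigest()
        if digest in hashes:
            recovered[digest] = candidate
    return recovered

# Toy example with invented hashes; one weak password is in the wordlist.
targets = {
    hashlib.sha1(b"Summer2020!").hexdigest(),
    hashlib.sha1(b"correct horse battery staple").hexdigest(),
}
print(crack(targets, ["password", "Summer2020!", "letmein"]))
```

The asymmetry is the point: the defender must protect every hash, while the attacker only needs one weak password to match, as one did here.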

From that single credential, we were able to connect to the company's remote access network. From there, we moved through the production environment without obstruction. Our final report to the client documented compromises spanning the operating system, credential stores, the internal network, automated deployment pipelines, the systems that push new software from development into live service, source code repositories, and identity systems. Seven distinct domains, reached from one low-privilege account on a machine nobody remembered to patch. 

The organization's production servers received monthly updates. Their cloud infrastructure was hardened and monitored. Every employee device ran endpoint detection software. Yet this single utility server, abandoned in the buffer zone, harbored a 12-year-old flaw that remained unpatched. That one oversight was enough to put everything at risk. 

Two Engagements, One Pattern 

The technical details of these two cases have almost nothing in common. Different industries, different entry points, different technologies. What they share is the underlying structure: a sequence of low-severity findings that together crossed every boundary the organization had drawn. 

In the first case, the retail company had a mobile security posture in place. It lacked a connected security posture. In the second, the engineering organization had a production security posture. It lacked a connected security posture. No single team was responsible for the gap between the mobile app and the cloud. No single team owned the forgotten server in the buffer zone. In both cases, the gap was the attack. 

This is the pattern I encounter most consistently across engagements. The industry data confirms that it is widespread. 

The Numbers Behind the Gap 

According to research published by Illumio in 2025, 90 percent of organizations have faced attacks involving lateral movement, the technique by which an attacker who has compromised one system moves through the environment to reach higher-value targets. The average enterprise now runs 76 different security tools, from endpoint detection platforms to vulnerability scanners to cloud security systems, according to data compiled by Infosecurity Magazine. Yet the average time to identify and contain a breach remains 241 days, according to Total Assure's analysis of industry incident data. 

Place those numbers against CrowdStrike's finding: an attacker with the right skills can move from initial access to critical infrastructure in just 62 minutes. Defenders take 241 days to detect and stop a breach. Attackers need less than an hour. Time is not on our side. 

The problem is not the number of tools. It is the gaps between them. Security teams are built around technology domains because IT departments are structured that way. Attackers are not. They follow data, credentials, and trust relationships wherever those connections lead. 

What Full-Stack Security Actually Means 

Full-stack security recognizes that modern attack chains cross layers, and that understanding the connections between systems is as important as understanding any one system in depth. It does not require every practitioner to become an expert in every domain. It requires every practitioner to keep asking one question: what can an attacker reach from here? 

That question changes the way you evaluate findings. Instead of asking what a vulnerability severity score is, you ask whether the credential you just found also works in adjacent systems. Whether cloud access permissions are scoped too broadly. Whether an account has trust relationships with other platforms. Whether passwords stored in a development repository years ago have ever been changed. Whether the test environment shares infrastructure with the live one. 

These are questions about connectivity and trust, not just about individual vulnerabilities. Answering them requires practitioners who pay attention to what sits next to their own area. 

For practitioners, three habits make the difference. First, mapping the full technology footprint before any assessment begins, including internet-facing services, cloud resources, build systems, and identity platforms, rather than working only from what the client's brief names. Second, testing every set of credentials discovered against every adjacent system they might reach, rather than stopping at the boundary of the initial finding.   

Third, building working knowledge of the security model of neighboring infrastructure: not expert-level mastery, but enough to recognize when a finding in one layer has consequences in another. 

For organizations, the changes are structural. Assessments should be scoped around data flows and trust boundaries rather than technology categories. Threat modeling should bring application, cloud, and network security teams into the same room, because the attack chains that cause serious damage span all three simultaneously. Remediation priority should reflect what an attacker can actually reach if a vulnerability is exploited, not just how severe the vulnerability looks on its own. 

It is also worth being direct about technical debt. The most dangerous finding in both engagements I described was not a novel vulnerability. In the first case, it was a credential that a developer hardcoded during testing and never removed. In the second, it was a patch that was never applied because no team felt responsible for applying it. Technical debt is security debt, and it accumulates where nobody is looking. 

The Gap That Tools Cannot Close Alone 

Organizations running dozens of security tools can still miss an attack that crosses three of those tools' coverage areas at the same time. Each tool generates accurate output within its own domain. An event monitoring platform captures network anomalies. Endpoint detection software tracks activity on individual machines. A vulnerability scanner flags the systems it has been directed to scan. But when a compromised account escalates from a development environment into cloud infrastructure and from there into a software build pipeline, the alerts for that attack are spread across systems that were never built to talk to each other. 

Closing that gap requires continuous visibility into how an organization's exposed attack surface changes over time, as new assets appear, patches expire, and credential paths open, rather than periodic snapshots that go out of date the moment they are finished. A penetration test conducted last month found what was exposed last month. An attacker scanning your environment today finds what is there today. 

This is one of many problems that my team at ANIMARUM is working towards solving, and why we built StacIntel (https://stacintel.se). StacIntel is an offensive security platform that performs continuous reconnaissance and security assessment against an organization's external and internal attack surface. It runs its own passive and active scanning, from subdomain enumeration and technology fingerprinting to port scanning and vulnerability discovery, and maps the results against compliance frameworks like NIS2, giving organizations a clear picture of where they actually stand rather than where they think they stand. With our upcoming appliance model, a lightweight agent that can be deployed inside a client's network, in the cloud, or on-prem, StacIntel extends that visibility into environments that cloud-only platforms simply cannot reach, from traditional IT infrastructure to operational technology and industrial control systems. The goal is to give security teams a single platform that identifies what is exposed, assesses what is compliant, and reaches networks that remote scanning alone cannot. 

Full Stack, or Full Stop 

Both cases come down to the same thing. Every team did their job. Every tool generated its output. The gap was not in effort or investment. It was in the question nobody was asking: given what we know about this environment, what can an attacker reach from here, and by which path? 

Attackers do not respect organizational boundaries. They follow the path of least resistance across whatever domains that path crosses. A hardcoded credential in a mobile application becomes a key to a cloud environment. A forgotten server in a network buffer zone becomes the entry point to an entire engineering organization's passwords. In both cases, the individual finding was unremarkable. The connection made it catastrophic. 

The nature of the adversary has not changed. What has changed is the complexity of the environments they move through, and the number of domain boundaries a single attack chain now crosses. Defending against that means thinking the same way: not in domains, but in paths.  

Full stack, or full stop. 

 

Disclosure: The author is the Cybersecurity Director at ANIMARUM, where he leads the company's cybersecurity efforts and helps develop and operate the StacIntel cyber intelligence platform. All security engagements described in this article were conducted under formal contract with the client's full authorization. 

About the Author 

Milot Shala is Cybersecurity Director at ANIMARUM. His work spans Ericsson, Scania, TRATON, and Wasabi Technologies LLC. He has more than 25 years of experience across software engineering and offensive security, specializing in red team operations, adversary emulation, and full-stack security assessments across cloud, automotive, IoT, and enterprise infrastructure. ANIMARUM's cyber intelligence platform StacIntel is available for early access at https://stacintel.se

Sources 

1. Illumio (2025). 90 percent of organizations face attacks involving lateral movement across infrastructure domains. 
  https://betanews.com/2025/10/02/90-percent-of-organizations-face-attacks-involving-lateral-movement/ 

2. Total Assure (2025). Average time to detect and contain a cyberattack. 
  https://www.totalassure.com/blog/average-time-to-detect-cyber-attack-2025 

3. CrowdStrike (2024). The rise of cross-domain attacks and the case for unified defense. CrowdStrike Global Threat Report, documenting a 62-minute average breakout time in adversary simulations. 
  https://www.crowdstrike.com/en-us/blog/rise-cross-domain-attacks-demands-unified-defense/ 

4. Infosecurity Magazine. Organizations manage an average of 76 security tools. 
  https://www.infosecurity-magazine.com/news/organizations-76-security-tools/ 

5. Qualys Threat Research Unit (2022). PwnKit: Local Privilege Escalation Vulnerability Discovered in polkit's pkexec (CVE-2021-4034). Public disclosure: Jan. 25, 2022. 
  https://blog.qualys.com/vulnerabilities-threat-research/2022/01/25/pwnkit-local-privilege-escalation-vulnerability-discovered-in-polkits-pkexec-cve-2021-4034 

6. OWASP API Security Top 10 (2023). API1:2023 Broken Object Level Authorization (BOLA/IDOR). 
  https://owasp.org/API-Security/editions/2023/en/0xa1-broken-object-level-authorization/ 

7. MITRE CWE-639: Authorization Bypass Through User-Controlled Key (Insecure Direct Object Reference). 
  https://cwe.mitre.org/data/definitions/639.html 
 


This post has been a cross-post from my post at https://blog.animarum.se/full-stack-or-full-stop/