Post-incident review: Source map exposure on non-production subdomain
On February 16, 2026, security researchers @vmfunc, @MDLcsgo, and @DziurwaF published a blog post identifying exposed frontend source maps on a non-production subdomain under withpersona-gov.com. Within an hour of learning about the post, we disabled the subdomain, confirmed that no secrets or customer data were exposed, and began a thorough internal review.
This post is our accounting of what happened, what we got wrong, and what we're changing. We want to be transparent not just about the technical details, but about the organizational gaps that allowed this to happen.
What are source maps?
When engineers write frontend code, it's authored in a readable, structured format. Before that code ships to production, it goes through a build process that minifies and bundles it: compressing variable names, removing whitespace, and combining files. The result is smaller and faster to load, but very difficult for a human to read.
Source maps are files that reverse this process. They map the minified production code back to the original authored source, making it possible to debug issues in a readable format. They're a standard development tool, essentially a lookup table that says "line 1, column 4823 in the minified file corresponds to line 47 of components/Form.tsx."
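To make that concrete, here is a simplified sketch of what a minified bundle and its source map look like. All file names, identifiers, and the mapping string below are illustrative examples, not taken from our actual build:

```javascript
// Illustrative only: the shape of a minified bundle and its source map.
// Names like "app.min.js" and "components/Form.tsx" are invented for this example.

// The tail of app.min.js carries a comment telling devtools where the map lives:
//   //# sourceMappingURL=app.min.js.map

// app.min.js.map is a JSON document in the Source Map v3 format:
const exampleSourceMap = {
  version: 3,                          // source map spec version
  file: "app.min.js",                  // the generated (minified) file
  sources: ["components/Form.tsx"],    // the original authored files
  names: ["handleSubmit", "formData"], // original identifiers, restored on decode
  mappings: "AAAA,SAASA..."            // base64-VLQ segments: generated pos -> original pos
};

// Devtools decode `mappings` to answer lookups like:
// "line 1, column 4823 of app.min.js" -> "line 47 of components/Form.tsx"
console.log(exampleSourceMap.sources[0]); // prints "components/Form.tsx"
```

The `mappings` string is the "lookup table" described above, compactly encoded; without the map, the browser can only show you the minified positions.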
Critically, source maps only pertain to frontend code: the JavaScript, HTML, and CSS that runs in your browser. They have nothing to do with backend systems, databases, APIs, or infrastructure.
What’s the impact?
Frontend code is, by definition, public. If you visit withpersona.com/dashboard or go through any Persona verification flow, your browser downloads our entire frontend codebase. You can open your browser's developer tools right now and inspect it. Source maps make that code easier to read, but they don't reveal anything that isn't already there.
Every feature and capability visible in our frontend code is already documented in our Help Center, described in our API documentation, or visible in the customer dashboard. There are no hidden features, no secret endpoints, and no sensitive credentials embedded in our frontend.
Our security model and source maps
Persona's security architecture is built on a fundamental principle: we do not trust the client. Our backend systems treat every request from a browser or device as potentially adversarial. Authentication, authorization, data validation, fraud detection signals, and all sensitive logic live server-side. No amount of frontend code inspection changes that.
We do employ obfuscation techniques in our frontend code to increase the cost and difficulty of reverse-engineering our fraud detection signals. These techniques are one layer in a defense-in-depth strategy. They raise the bar, but we don't rely on them as a primary security control. There is active internal and external debate about the efficacy of frontend obfuscation: if someone is sophisticated enough to reverse-engineer minified JavaScript to bypass detection, having access to source maps is unlikely to be the bottleneck.
In fact, Persona historically shipped with source maps enabled in production. We moved away from that as part of a broader effort to make casual reverse-engineering less convenient, not because we considered exposed source maps a security vulnerability.
There is no sensitive data in our frontend code. No API keys, no secrets, no credentials, no customer data. This is by design.
A timeline of what happened
| Date | Event |
|---|---|
| February 4, 2026 12:35AM PST | Engineer creates onyx subdomain for compute migration testing |
| February 16, 2026 2:30PM PST | @vmfunc publishes blog post identifying exposed source maps |
| February 16, 2026 5:10PM PST | Persona Security is notified of blog post |
| February 16, 2026 5:34PM PST | Persona disables the onyx subdomain |
| February 16, 2026 7:10PM PST | Direct engagement with @vmfunc begins |
The context: Persona is in the process of pursuing FedRAMP authorization. As part of that effort, we maintain test environments under withpersona-gov.com to validate our infrastructure. This entire domain has never served any federal customers and contains zero customer data.
An engineer was setting up a new service to test a more resilient, multi-node compute architecture for this environment. The goal was to improve scalability and reliability. This test deployment ran in a completely separate GCP project, fully isolated from our FedRAMP-validated environment. To move quickly during development, the engineer enabled source maps on the subdomain; source maps are enormously helpful for debugging during active development. The exposure occurred only on this new test “onyx” subdomain, not on our FedRAMP-validated deployment.
About the name: The subdomain was called onyx, a reference to the Pokémon Onix (a creature made of multiple boulders, fitting for a multi-node architecture). It was an informal codename chosen by the engineer. It had no connection whatsoever to Fivecast ONYX, an unrelated third-party commercial product previously used by ICE. We understand this coincidence caused confusion, and we address it further below.
The configuration gap: We did not have a clear, enforced policy on whether source maps must be disabled in non-production environments. Our engineering team generally does not consider exposed source maps a meaningful risk, since the underlying code is already public. Because this was a non-production domain and no clear source map policy existed, there was no automated scanning or alerting to detect when source maps were externally accessible. The engineer made a reasonable (if ultimately incorrect) judgment call given the ambiguity.
To be explicit: the frontend code that was visible via source maps on onyx.withpersona-gov.com is functionally identical to the code your browser downloads when you visit our dashboard. The source maps made it more readable, but not more accessible.
What we got wrong
This is the section we think matters most. The technical exposure itself was limited, but the organizational failures that enabled it are worth examining honestly.
1. Unclear and inconsistent security policies
We had an unclear policy on source map exposure for non-production environments. This meant that source maps were not available on our production dashboard at withpersona.com, but were available on onyx.withpersona-gov.com. That inconsistency is a problem regardless of the technical risk level. Security policies need to be clear and uniformly applied, especially across environments that live under the same domain as our compliance-sensitive infrastructure.
2. Over-indexing on engineering risk assessment
Our engineers correctly assessed that source maps don't expose secrets or create a direct path to compromising our systems. Because the information is already in our help center and dashboard, the engineering calculus was: "it isn't really risky."
That assessment was too narrow. We were evaluating this purely through a technical security lens and underweighting how it would be perceived externally. When you handle sensitive identity data, as we do, the standard isn't just "is this secure?" It's also "could this appear insecure?" We failed to ask that second question.
3. Codenames on public-facing infrastructure
Using an informal codename like onyx on a publicly resolvable subdomain was a mistake. It created an unnecessary appearance of a connection to an unrelated program and generated confusion that was entirely avoidable. Internal project names should stay internal. Anything publicly visible needs to be clear and descriptive.
4. Insufficient guardrails on non-production environments
Because our non-production environments never contain customer data, we gave engineers additional latitude in configuring them. That latitude was well-intentioned (it helps teams move quickly) but it lacked the guardrails needed to prevent configurations that, while not technically dangerous, are publicly visible and subject to scrutiny.
What we're doing about it
We are implementing the following changes to strengthen our security controls and compliance practices across all environments:
Establishing a clear source map policy. We now have an explicit, unambiguous policy: source maps must not be publicly accessible on any environment, production or otherwise.
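In practice, a "no public source maps" policy largely reduces to a build-time setting in common frontend bundlers. The snippet below is a generic illustration of that idea, not a description of our actual build configuration:

```javascript
// Illustrative only: disabling source map emission in a common bundler.
// This is a generic vite.config.js sketch, not Persona's actual setup.

export default {
  build: {
    sourcemap: false, // no .map files are written to the output directory
  },
};

// webpack's analogous setting would be:
//   module.exports = { devtool: false };
```

A build-time setting alone is not sufficient as a control, which is why it is paired with the serving-layer enforcement described next.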
Automated source map blocking. We are deploying WAF rules that automatically strip or reject requests for source map files. Even if a build were to inadvertently include source maps, they would never be served to an external requester.
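The rule itself is simple: reject any request whose path ends in `.map` before it reaches static file serving. As a sketch of the idea (a real deployment would express this in WAF/CDN rules, and all names here are illustrative), the same predicate looks like this as a tiny middleware:

```javascript
// Illustrative sketch of a "block source map requests" rule.
// Not our actual WAF configuration; names are invented for this example.

function isSourceMapRequest(pathname) {
  // Source maps conventionally end in ".map" (e.g. app.min.js.map).
  return pathname.toLowerCase().endsWith(".map");
}

// Express-style middleware applying the predicate above:
function blockSourceMaps(req, res, next) {
  // Parse only the path, so query strings don't bypass the check.
  const { pathname } = new URL(req.url, "http://localhost");
  if (isSourceMapRequest(pathname)) {
    res.statusCode = 404; // respond as if the file does not exist
    return res.end();
  }
  next();
}
```

Enforcing this at the serving layer (rather than only at build time) is the defense-in-depth point: even a build that accidentally emits `.map` files never serves them externally.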
Hardening non-production environments. We are implementing infrastructure-level guardrails on non-production environments to enforce consistent security baselines, regardless of whether those environments contain data.
Eliminating public-facing codenames. Going forward, all publicly resolvable subdomains and endpoints will use clear, descriptive names. Internal project codenames will remain internal.
Reviewing through a perception lens. We are adding a step to our security review process that explicitly evaluates not just "is this secure?" but "could this appear insecure to a reasonable external observer?" Persona handles sensitive identity verification data. We owe it to our customers and their end users to hold ourselves to a standard that accounts for perception, not just technical risk.
A note on trust
We recognize that trust in a company like Persona isn't built by writing a single blog post. It's built over time, through consistent transparency and action. We handle something deeply sensitive: helping verify that people are who they say they are. That responsibility demands that we engage openly when we make mistakes.
We're grateful to @vmfunc, @MDLcsgo, and @DziurwaF for the research that brought this to our attention. While we disagree with some of the characterizations in the post, the core finding, that source maps were exposed on a subdomain where they shouldn't have been, was accurate, and it prompted improvements that make us better. We appreciate that they engaged with us directly and in good faith: transparently, candidly, and with the shared goal of improving security.
This post focuses on addressing the technical findings of their research. We are continuing to engage with the group to provide clarity on the remainder of their questions, and we look forward to their future publications, on Persona and beyond.
We will continue to invest in transparency about how our systems work, how we handle data, and how we respond when things go wrong. If you have questions, we welcome them.