How I Solved Cascading Hash Changes with Import Maps
How I built a Vite plugin using Import Maps to prevent unnecessary file re-hashing when dependencies change, solving a critical production deployment issue.
Howdy! I've had this problem for 5+ years, but only now decided to tackle it because it got to a point where I could no longer ignore it. When I changed a single character in one file, half the JavaScript files in my build would get new hashed filenames, even though their actual content hadn't changed. This caused unnecessary cache invalidation, made it nearly impossible to track what actually changed between builds, and, worst of all, broke my Cloudflare Pages builds because of a file limit.
Below I'll break down the problem, why existing solutions didn't work for me, and how I built a custom Vite plugin using Import Maps to solve it once and for all.
The Problem: Cascading Hash Changes
Vite uses content-based hashing for production builds. When you build your app, each JavaScript file gets a hash in its filename based on its content. If button.tsx compiles to button-abc12345.js, and the content changes, it becomes button-def45678.js. This is great for cache busting—users get the new file when it changes.
The problem comes when File A imports File B. Let's say you have:
// main.js
import { Button } from "./button-abc12345.js";
When button.tsx changes, Vite generates button-def45678.js. But now main.js also changes because it contains the string "./button-abc12345.js", which is now wrong. So main.js gets a new hash too, even though the actual logic in main.js didn't change at all.
This cascades through your entire dependency graph. Change one utility function, and suddenly half your JS files get new hashes. In my case, changing a single character in useBackgroundMusic.ts caused over 500 files to be re-hashed.
The real-world impact was significant. We bundle eight versions of our past builds' assets so that users on slightly stale clients can keep running their version when we deploy a new one to Cloudflare Pages. However, Cloudflare Pages has a 20,000-file limit, which we started hitting after an earlier i18n change dramatically increased the number of files we generate.
Solving cascading hashes allows us to store far more past builds without hitting these limits because now most files no longer need to change. This also reduces the likelihood that a user on a stale build will error out, since it's far more likely they'll be requesting a now-unchanged file that we happen to have.
Why Not the Alternatives?
When I first looked at solving this, I considered a few approaches. None of them quite fit.
Post-build Scripts
My initial thought was to write a post-build script that would normalize all the import paths, re-hash the files, and update the references. This seemed straightforward—just regex replace the hashed filenames with stable names, then recompute hashes.
I rejected this approach because of "Heisenbugs" and cache poisoning concerns. Even though we store past builds in Cloudflare Pages, the risk of cache inconsistencies wasn't worth it. A script that modifies files after the build could introduce subtle bugs that only appear in production, and debugging those would be a nightmare.
Vite manualChunks
Another option was to use Vite's manualChunks configuration to separate stable code (like node_modules) from unstable code (business logic). The idea was that vendor code would change less frequently, so fewer files would cascade.
This doesn't actually solve the root problem—it just mitigates it. You still get cascading hashes within your business logic chunks. I wanted a solution that addressed the core issue, not just made it slightly less bad.
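For reference, here's roughly what that mitigation looks like (a sketch using Rollup's manualChunks option; the exact chunking policy is up to you):
// vite.config.ts — the mitigation I rejected: split vendor code into its own chunk
import {defineConfig} from 'vite';
export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // Vendor chunks change rarely, so they cascade less often,
          // but business-logic chunks still cascade among themselves
          if (id.includes('node_modules')) return 'vendor';
        },
      },
    },
  },
});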
Import Maps: The Modern Solution
Import Maps are a browser-native feature (with polyfill support for older browsers) that decouples module specifiers from file paths. Instead of File A importing "./button-abc123.js", it imports "button". The browser uses the import map to resolve "button" to the actual hashed filename.
This is exactly what I needed. File A's content stays identical (it always imports "button"), so its hash stays the same. Only the import map and the changed file get new hashes. I was kinda shocked no one had already made a good plugin for this!
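For concreteness, here's a minimal import map (with made-up filenames matching the earlier example):
<script type="importmap">
{
  "imports": {
    "button": "/assets/button-def45678.js"
  }
}
</script>
<script type="module">
  // The bare specifier resolves through the import map,
  // so this module's source never embeds a hashed path
  import { Button } from "button";
</script>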
The Implementation Journey
I decided to build a Vite plugin that would:
- Transform all relative imports to use stable module specifiers
- Generate an import map that maps those specifiers to the actual hashed filenames
- Inject the import map into the HTML
The plugin is now available on GitHub: @foony/vite-plugin-import-map
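To set expectations, wiring it up looks roughly like this (a hypothetical sketch; check the README for the actual export name and options):
// vite.config.ts — hypothetical usage; the real export name may differ
import {defineConfig} from 'vite';
import importMapPlugin from '@foony/vite-plugin-import-map';
export default defineConfig({
  plugins: [importMapPlugin()],
});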
Initial Approach
I started with a Vite plugin using the generateBundle hook. My first attempt used regex to find and replace import paths. This was easy to code and worked for our small team at Foony, but it was brittle and definitely wouldn't survive in a general-purpose plugin, where false positives could silently corrupt code.
The regex approach had obvious problems: what if a string in the code happened to look like a filename? What about dynamic imports? What about export statements? I needed a more robust solution if I was going to build a plugin for others.
AST Parsing
I needed to parse the JavaScript code properly to find all import statements. My first attempt was es-module-lexer, which is purpose-built for parsing ES modules. Unfortunately, it caused native panics during Vite's module analysis phase, and even its asm.js build didn't avoid them.
I settled on Acorn, a fast, lightweight, pure JavaScript parser. Combined with acorn-walk for AST traversal, it gave me everything I needed without the native dependency issues.
Key Challenges Solved
Handling All Import Types
Imports come in many forms, and they're treated differently in the AST. I needed to handle:
- Static imports: import x from "./file.js"
- Dynamic imports: import("./file.js")
- Named re-exports: export { x } from "./file.js" (I initially missed this one!)
- Re-export all: export * from "./file.js"
The re-export case was particularly sneaky; I only caught it when I noticed a file that wasn't being transformed. The code contained export{PoolBalls,PoolCues,PoolTables}from"./Items-Bd_KmSuk.js", and my plugin was ignoring it entirely because I was only looking for ImportDeclaration and ImportExpression nodes.
Here's how I handle all of them now:
walk(ast, {
ImportDeclaration(node: any) {
// Static imports: import x from "spec"
const specifier = node.source.value;
// ... transform logic
},
ExportNamedDeclaration(node: any) {
// Named exports with source: export { x, y } from "spec"
if (!node.source?.value) return;
// ... transform logic
},
ExportAllDeclaration(node: any) {
// Export all: export * from "spec"
if (!node.source?.value) return;
// ... transform logic
},
ImportExpression(node: any) {
// Dynamic imports: import("spec")
// ... transform logic
},
});
Deterministic Conflict Resolution
When multiple files have the same base name (like multiple index.tsx files in different directories), I need to disambiguate them. I can't just use "index" for all of them.
My solution: if there's a conflict, I hash the original source path plus the base name. For example, src/client/games/chess/index.tsx:index gets hashed to create index-abc123. This ensures that the same file always gets the same module specifier across builds, even if other files with the same name are added or removed.
I use chunk.facadeModuleId (the entry point) as the primary identifier, falling back to chunk.moduleIds[0] if that's not available. This gives me a stable source path for deterministic hashing.
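Here's a minimal sketch of that disambiguation (the sha256 choice and 8-character truncation are my illustrative assumptions):
import {createHash} from 'node:crypto';
// Derive a stable module specifier for a chunk
function getModuleSpecifier(
  baseName: string, // e.g. "index"
  sourcePath: string, // e.g. "src/client/games/chess/index.tsx"
  nameCounts: Map<string, number>, // from the count pass
): string {
  // Unique base name: use it directly
  if ((nameCounts.get(baseName) ?? 0) <= 1) return baseName;
  // Collision: hash sourcePath + baseName so the same file always gets
  // the same specifier, no matter what else is added to the build
  const hash = createHash('sha256').update(`${sourcePath}:${baseName}`).digest('hex').slice(0, 8);
  return `${baseName}-${hash}`;
}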
Source Map Chaining
When I transform the code, I'm breaking the source map chain. The existing source map maps from the original TypeScript source through Babel and minification to the current code. My transformations add another layer, so I need to preserve that chain.
I use MagicString to track my transformations and generate a new source map. Then I merge it with the existing map by preserving the original sources and sourcesContent arrays. This maintains the full chain: Original Source → (existing map) → Transformed Code.
const existingMap = typeof chunk.map === 'string' ? JSON.parse(chunk.map) : chunk.map;
const newMap = magicString.generateMap({
source: fileName,
file: newFileName,
includeContent: true,
hires: true,
});
// Merge: use new map's mappings but preserve original sources
chunk.map = {
...newMap,
sources: existingMap.sources || newMap.sources,
sourcesContent: existingMap.sourcesContent || newMap.sourcesContent,
file: newFileName,
};
Re-hashing Transformed Content
I need stable file content. To get it, I transform the imports (replacing Vite's hashed import paths with my stable specifiers) and then exclude source map comments from the hash calculation, since they reference old filenames.
After that, I compute a new hash, and update both the filename and the import map entry.
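In code, that looks something like this (the hash algorithm and length here are assumptions for illustration):
import {createHash} from 'node:crypto';
// Hash the transformed code with the sourceMappingURL comment stripped,
// since that comment still references the old filename
function computeStableHash(code: string): string {
  const stripped = code.replace(/\/\/# sourceMappingURL=.*$/m, '');
  return createHash('sha256').update(stripped).digest('hex').slice(0, 8);
}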
The Final Implementation
The plugin uses a four-pass strategy:
- Count pass: Detect name collisions by counting how many files share each base name
- Map pass: Create the chunk mapping (hashed filename → module specifier) and initial import map
- Transform pass: Rewrite import paths in the code, recompute hashes, update source maps
- Rename pass: Update bundle filenames and finalize the import map
Here's the core transformation logic:
import {Parser} from 'acorn';
import {simple as walk} from 'acorn-walk';
import MagicString from 'magic-string';

const magicString = new MagicString(chunk.code);

// Parse the code to get an AST
const ast = Parser.parse(chunk.code, {
ecmaVersion: 'latest',
sourceType: 'module',
locations: true,
});
const importsToTransform: Array<{start: number; end: number; replacement: string}> = [];
// Traverse the AST to find all imports/exports
walk(ast, {
ImportDeclaration(node: any) {
const specifier = node.source.value;
const filename = specifier.split('/').pop()!;
const moduleSpec = chunkMapping.get(filename);
if (moduleSpec) {
importsToTransform.push({
start: node.source.start + 1, // +1 to skip opening quote
end: node.source.end - 1, // -1 to skip closing quote
replacement: moduleSpec,
});
}
},
// ... handle other node types
});
// Apply transformations in reverse order to preserve positions
importsToTransform.sort((a, b) => b.start - a.start);
for (const transform of importsToTransform) {
magicString.overwrite(transform.start, transform.end, transform.replacement);
}
For injecting the import map into HTML, I use Vite's tag injection API instead of regex manipulation:
transformIndexHtml() {
return {
tags: [
{
tag: 'script',
attrs: {type: 'importmap'},
children: JSON.stringify(importMap, null, 2),
injectTo: 'head-prepend',
},
],
};
}
This is much more reliable than trying to regex-match HTML tags.
By the Numbers
To give you a sense of what this plugin does:
- 1,000+ JavaScript files processed per build
- ~2-3 seconds added to build time (acceptable trade-off)
- ~99% reduction in unnecessary hash changes (most files now only change when their actual content changes)
- ~340 lines of plugin code (including comments and error handling)
The plugin handles all the edge cases I've encountered so far, and the build process is now much more predictable.
Lessons Learned
Why AST parsing is essential
Regex on bundled code is dangerous. If a string in your code happens to look like a filename, regex will rewrite it. AST parsing ensures you only transform actual import/export statements.
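A contrived example of the false-positive risk:
// An AST walk only touches import/export source nodes, so only the
// first line below is transformed; a naive regex would rewrite both
import {Button} from './button-abc12345.js'; // should be rewritten
const helpText = 'See ./button-abc12345.js for details'; // should NOT be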
Why Acorn over es-module-lexer
es-module-lexer is faster and more purpose-built, but the native panic issues made it unusable in my Vite plugin context. Acorn is pure JavaScript, which means no native dependencies to worry about. I'll want to look at es-module-lexer in the future as a speed optimization, but for now Acorn works perfectly.
Why Import Maps over alternatives
Import Maps are a web standard with native browser support. They're the "right" way to solve this problem. The polyfill (es-module-shims) handles older browsers (e.g. Safari < 16.4) gracefully, and the solution is clean and maintainable.
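Wiring up the polyfill is a single script tag (version pinned here purely for illustration):
<!-- es-module-shims adds import map support to older browsers -->
<script async src="https://ga.jspm.io/npm:es-module-shims@1.8.0/dist/es-module-shims.js"></script>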
Conclusion
The Import Maps plugin successfully prevents cascading hash changes in my Vite builds. Files now only get new hashes when their actual content changes, not when their dependencies change. This makes builds more predictable, reduces unnecessary cache invalidation, and helps us stay under Cloudflare Pages' file limits.
The solution is simple, maintainable, and uses modern web standards. It's a good example of how sometimes the "right" solution is also the simplest one—once you understand the problem deeply enough to see it.
The plugin is open source and available on GitHub: @foony/vite-plugin-import-map. You can install it with npm install @foony/vite-plugin-import-map and start using it in your own Vite projects.
Future improvements might include optimizing with es-module-lexer once the native panic issues are resolved, or adding support for more complex import scenarios. But for now, the plugin does exactly what I need it to do.
And who knows? Maybe someday Vite will natively support something like this.