How I Implemented i18n for 20 Languages in 3 Days
How I translated Foony into 20 languages in just 3 days using a custom 3 KB library and a swarm of AI agents.
Howdy! I just finished translating Foony into 20 different languages, a massive undertaking that touched almost every file in the codebase, and I got it all done in just 3 days.
Below I'll break down how I did it, the specific numbers behind the change, and why I decided to roll my own translation library (yet again) instead of using the industry standard.
Why not i18next?
When I first looked at adding translations, I considered the industry standard: i18next and react-i18next.
Instead, I decided to optimize for maintainability by AI. i18next is powerful, but its API variety can cause LLMs to hallucinate or write inconsistent code. By constraining the library to a simple t() and interpolate(), I ensured 10+ parallel agents could write 100% type-safe code with almost zero human intervention.
I was also wary of buying into a large ecosystem that might introduce breaking changes later. Having been burned by painful migrations like React Router v5 and MUI v4 → v5, I know that rapid breaking of backwards-compatibility is all too common in JavaScript-land. The cost of adding pluralization features later is lower than the cost of manually migrating 139k lines of code now.
I wanted something dead simple, extremely lightweight, and tailored exactly to my team's needs.
So I wrote my own.
I built a 3 KB constrained subset specifically designed to enable high-accuracy, autonomous AI refactoring. This allowed me to act as a single engineer accomplishing a 5-person team's 3-week workload in just 3 days.
The Custom Implementation
I came up with a minimal i18n library that sits at about 3 KB gzipped. It exposes two main functions: getTranslation() for non-React contexts and a useTranslation() hook for components.
These return t() for simple string replacement and interpolate() for when I need to inject React components into a translation string (like a link or an icon). Both functions support variable replacement, e.g. "Hello {{thing}}", {thing: 'World'}.
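In a component, usage looks roughly like this (the keys, the component, and the import path here are illustrative, not taken from the real codebase):
import { useTranslation } from '../i18n'; // illustrative import path

export function WelcomeBanner() {
  const { t, interpolate } = useTranslation();

  return (
    <div>
      {/* Simple lookup, plus variable replacement via the {{thing}} syntax */}
      <h1>{t('welcome')}</h1>
      <p>{t('greeting', { thing: 'World' })}</p>

      {/* Injecting a React element into a string like "Read the {{link}} for details" */}
      <p>{interpolate('misc/footer.privacy', { link: <a href="/privacy">privacy policy</a> })}</p>
    </div>
  );
}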
Here's the core t() function:
export function t(key: TranslationKeys, values?: Record<string, string | number>, locale?: SupportedLocale): string {
  let namespace: string = '';
  let translationKey: string = key;

  // Check if key contains '/' - this indicates a namespace
  const slashIndex = key.indexOf('/');
  if (slashIndex !== -1) {
    const parts = key.split('/');
    namespace = parts.slice(0, -1).join('/');
    translationKey = parts[parts.length - 1];
  }

  const targetLocale = locale ?? currentLocale;
  const text = getTranslationValue(targetLocale, namespace, translationKey);

  if (values) {
    return interpolateString(text, values);
  }
  return text;
}
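t() hands the variable replacement off to interpolateString(), which isn't shown above. A minimal version that matches the {{thing}} syntax could look like this (a sketch, not the actual implementation):
function interpolateString(text: string, values: Record<string, string | number>): string {
  // Replace each {{name}} placeholder with its value; unknown placeholders are left untouched.
  return text.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in values ? String(values[name]) : match
  );
}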
And the React hook:
export function useTranslation() {
  const [language] = useLanguage();

  return useMemo(() => ({
    t: (key: TranslationKeys, values?: Record<string, string | number>) =>
      t(key, values, language),
    interpolate: (key: TranslationKeys, components: Record<string, ReactNode>) =>
      interpolate(key, components, language),
  }), [language, version]);
}
The core of the whole library is only about 580 lines of code. It handles:
- Lazy-loading translation files so we don't ship all 20 languages to every user (sketched below).
- Code-splitting translations by "namespace" (e.g. common, misc, games/{gameId}).
- A "debug" locale that shows the raw keys so I can verify everything is wired up correctly.
To ensure the system remains easy to maintain, I also added comprehensive documentation in shared/src/i18n/README.md, covering everything from file structure to usage examples for both client and server. Since I'm not using a standard library, having this reference is critical for onboarding new team members (or just reminding my future self or LLMs how it works).
By the Numbers
To give you a sense of the scale of this update, here is what changed in the codebase:
- 20 languages supported (plus a debug locale for dev).
- 360 locale files created.
- 139,031 lines of translation code.
- 3,938 calls to t() added across the client.
- 728 source files modified.
- 18 English source files that serve as the source of truth (16 games + common + misc).
Orchestrating with Agents
Doing this manually would have taken months of mind-numbing, mechanical work. Instead, I orchestrated over a dozen Cursor agents simultaneously to do the heavy lifting.
I started by breaking the codebase down into "sections" based on folders. Each game on Foony got its own folder and its own translation namespace. This keeps the initial load size small since you only load the translations for the game you're playing.
I ran multiple Cursor agents simultaneously. I assigned each agent a specific section, like "convert the Chess game to use translations," and it went through file by file, finding user-facing strings and replacing them with t('games/chess/some.key').
The agent would then add that key to the appropriate English locale file with a JSDoc comment explaining the "what" and "where" of the string. This context is important when generating the translations for other languages, as it helps the LLM understand if "Save" means "Save Game Configuration" or "Save Your Draw & Guess Drawing".
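After a pass like that, an entry in the English source file ends up looking something like this (the keys and comments are invented for illustration):
// en/games/chess.ts (illustrative entry, not the real file)
export default {
  promotion: {
    /**
     * Title of the dialog shown when a pawn reaches the last rank and the
     * player has to choose which piece to promote to.
     */
    title: 'Choose a piece',
    /** Confirm button in the same promotion dialog. */
    confirm: 'Promote',
  },
};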
Quality Control
I quickly reviewed all the code that was generated. The agents were surprisingly good, but they did make occasional mistakes, like putting the useTranslation hook after an early return statement.
Strongly-typed translations helped immensely. They ensured every locale file had all the right keys (and none of the wrong ones), and that every call to t() and interpolate() referenced a translation key that actually exists.
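Part of that comes from typing each locale file against the shape of its English counterpart (the DinomightTranslations type in the prompt further down plays this role); presumably those types are derived from the English files, roughly like this sketch, continuing the hypothetical chess example from above:
// The per-namespace type is (presumably) derived from the English source file.
type ChessTranslations = typeof import('./locales/en/games/chess').default;

// ja/games/chess.ts -- a missing key, an extra key, or wrong nesting is a compile error.
const chess: ChessTranslations = {
  promotion: {
    title: '昇格する駒を選んでください',
    confirm: '昇格する',
  },
};
export default chess;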
The type system extracts all possible translation keys from the English source files:
/**
 * Extracts all possible paths from a nested object type, creating dot-notation keys.
 * Example: {a: string, b: {c: string, d: {e: string}}} → 'a' | 'b.c' | 'b.d.e'
 */
type ExtractPaths<T, Prefix extends string = ''> = T extends string
  ? Prefix extends '' ? never : Prefix
  : T extends object
    ? {
        [K in keyof T]: K extends string | number
          ? T[K] extends string
            ? Prefix extends '' ? `${K}` : `${Prefix}.${K}`
            : ExtractPaths<T[K], Prefix extends '' ? `${K}` : `${Prefix}.${K}`>
          : never
      }[keyof T]
    : never;

export type TranslationKeys =
  | ExtractPaths<typeof import('./locales/en/index').default>
  | `misc/${ExtractPaths<typeof import('./locales/en/misc').default>}`
  | `games/chess/${ExtractPaths<typeof import('./locales/en/games/chess').default>}`
  | `games/pool/${ExtractPaths<typeof import('./locales/en/games/pool').default>}`
  // ... and so on for all games
This gives perfect TypeScript autocomplete, and any typo in a translation key is caught at compile time. The agents can't make mistakes like t('games/ches/name') because TypeScript immediately flags it.
Localization
Once the English conversion was done, I broke up the remaining locale tasks. I made each agent responsible for converting a single English locale file to a specified language.
For example, I gave the agents a prompt like this:
Please ensure that ar/games/dinomight.ts has all the translations from en/games/dinomight.ts.
Use `export const account: DinomightTranslations = {`.
Iterate until there are no more type errors for your translation file (if you see errors for other files, ignore them--you are running in parallel with other agents that are responsible for those other files).
Your translations must be excellent and correct for the jsdoc context provided in en.
You must do this manually and without writing "helper" scripts, and with no shortcuts.
I considered having Cursor write a script that feeds each of these files into an LLM API and generates the translations, but I wanted to save a bit on LLM cost. In hindsight, a script that only updates missing translations would have been the better approach, and I'll probably use a similar solution in the future. I'd like to track which strings still need translating or updating, but I also want to keep things simple; I might move the translation work into a database or something.
I also added a "debug" locale that is only available in development. This lets me view all replaced strings to verify things are working (plus I think it's cool). When you use the debug locale, t() returns the key wrapped in brackets:
if (targetLocale === 'debug') {
  return `⟦${key}⟧`;
}
So instead of seeing "Welcome to Foony!", you'd see ⟦welcome⟧, making it easy to spot any missing translations.
Finally, another agent implemented /{locale}/** routing, so that URLs like /ja/games/chess resolve to the correct language (in this case Japanese).
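I won't reproduce the real routing code here, but assuming React Router (which the earlier v5 migration story suggests Foony uses), the shape of it is roughly the following; setCurrentLocale, SUPPORTED_LOCALES, and SiteRoutes are hypothetical names:
import { useEffect } from 'react';
import { Route, Routes } from 'react-router-dom';
// Hypothetical exports from the i18n library:
import { setCurrentLocale, SUPPORTED_LOCALES, type SupportedLocale } from '../i18n';
// Stand-in for the existing (locale-agnostic) route tree:
import { SiteRoutes } from './SiteRoutes';

function LocaleLayout({ locale }: { locale: SupportedLocale }) {
  // Activate the locale that the URL prefix asked for.
  useEffect(() => {
    setCurrentLocale(locale);
  }, [locale]);

  return <SiteRoutes />;
}

export function AppRoutes() {
  return (
    <Routes>
      {/* One prefixed copy of the route tree per locale: /ja/games/chess, /es/games/pool, ... */}
      {SUPPORTED_LOCALES.map((locale) => (
        <Route key={locale} path={`/${locale}/*`} element={<LocaleLayout locale={locale} />} />
      ))}
      {/* Unprefixed URLs fall back to the default (English) tree */}
      <Route path="/*" element={<SiteRoutes />} />
    </Routes>
  );
}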
Translating the Blog
Translating the UI strings was one thing, but what about the blog posts? I didn't want to spin up and manage even more agents to translate all my blog posts.
I solved this by having an agent create a script (scripts/src/generateBlogTranslations.ts) that automates the entire process.
Here's how it works:
- It scans the client/src/posts/en directory for English MDX files.
- It checks for missing translations in the other locale folders (e.g. posts/ja, posts/es).
- If a translation is missing, it reads the English content and feeds it into Gemini 3 Pro Preview with a specific prompt to translate the content while preserving Markdown formatting.
- It saves the new file to the correct location (a simplified sketch of the script follows).
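Condensed, the script does something like this (a simplified sketch; translateMdx() is a stand-in for the actual Gemini call and prompt):
// Simplified sketch of scripts/src/generateBlogTranslations.ts.
import { existsSync } from 'node:fs';
import { mkdir, readdir, readFile, writeFile } from 'node:fs/promises';
import path from 'node:path';

// Stand-in for the real LLM call (prompted to translate while preserving MDX formatting).
declare function translateMdx(english: string, locale: string): Promise<string>;

const POSTS_DIR = 'client/src/posts';
const LOCALES = ['ja', 'es', 'fr' /* ...and the rest */];

async function main() {
  const englishPosts = (await readdir(path.join(POSTS_DIR, 'en'))).filter((file) => file.endsWith('.mdx'));

  for (const locale of LOCALES) {
    for (const file of englishPosts) {
      const target = path.join(POSTS_DIR, locale, file);
      if (existsSync(target)) continue; // translation already exists

      const english = await readFile(path.join(POSTS_DIR, 'en', file), 'utf8');
      const translated = await translateMdx(english, locale);

      await mkdir(path.dirname(target), { recursive: true });
      await writeFile(target, translated, 'utf8');
      console.log(`Wrote ${target}`);
    }
  }
}

void main();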
On the frontend, I use import.meta.glob to dynamically import all these MDX files. My PostPage component then simply checks the user's current locale and lazy-loads the correct MDX file. If a translation is missing (because I haven't run the script yet), it gracefully falls back to English.
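The lookup itself is roughly this (simplified):
// Vite's import.meta.glob returns a map of lazy loaders keyed by file path.
const posts = import.meta.glob('./posts/**/*.mdx');

function resolvePost(slug: string, locale: string) {
  // Prefer the localized MDX file; fall back to English if it hasn't been generated yet.
  return posts[`./posts/${locale}/${slug}.mdx`] ?? posts[`./posts/en/${slug}.mdx`];
}

// PostPage then does something like:
//   const Post = lazy(resolvePost(slug, locale) as () => Promise<{ default: ComponentType }>);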
Conclusion
At this point, I had a fully-functioning site translated to all 20 locales!
This was a crazy 3 days, but the result is a fully localized site that feels (mostly) native to users around the world. By building a custom, lightweight library and leveraging AI agents for the tedious refactoring work, I managed what would've been impossible only a year ago: full i18n in 3 days for a complex website by 1 engineer. The future of programming isn't about writing code fast. It's about orchestrating AI agents and possessing the deep domain expertise to verify their output.