How I Made Foony Work Behind Proxies
Howdy! I've known for a long time that web proxies cause compatibility issues for websites. However, Foony's compatibility with proxies has been notoriously bad, and fixing it turned out to be fairly tricky.
This isn’t a “Foony uses exotic APIs” issue either (even though we do). It was a combination of:
- Proxies doing aggressive string rewriting in places they absolutely shouldn’t.
- Proxies treating the main site domain differently than “other” domains (CDNs, asset hosts, etc.).
- And the harsh reality that some proxies just can’t support modern web apps (HTTPS correctness, WebSockets, etc.).
We don’t work with every proxy, but we now work with at least croxyproxy and proxyorb, which was the goal.
Below I explain what broke, why it broke, and the fixes that actually mattered.
Pass 1: Valid, broken Three.js shaders
The symptom
When I tried croxyproxy, I wasn't able to play 8 Ball Pool or any of Foony's other Three.js games.
I kept getting a shader compilation failure in Three.js with errors like:
- “Shader Error 1282 - VALIDATE_STATUS false”
That message was almost entirely useless. It usually means “your shader is invalid, good luck.” Great. If you ever wonder why I always use unique error messages for every single error on Foony, this is why. It helps pinpoint problems instead of just "code's broken, go fix."
But why were perfectly valid Three.js shaders breaking? What gives?
The actual cause: proxies corrupting layout(location = N)
Three.js emits GLSL with layout qualifiers like:
layout(location = 0) in vec3 position;
Some proxies try to rewrite anything that looks like the JavaScript location API by doing naive global string replacement. That’s already bad in JS, but they were also doing it inside shader source strings. I guess AST parsing is too expensive for them.
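Imagine something like this running over the entire response body (my guess at the mechanism, illustrative only, not their actual code):
body = body.replace(/location/g, '__cpLocation'); // no parser, no context awareness, no mercy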
So the shader source got corrupted into something like:
layout(__cpLocation = 0) in vec3 position;
The identifier must be location there. Anything else is invalid GLSL, and the compiler rejects it. (Layout Qualifiers in GLSL)
This is a Three.js problem only in the sense that Three.js generates shaders dynamically, and we pass them to WebGL at runtime. The real bug is the proxy’s rewriting strategy.
Why I didn’t “fix the proxy”
A naive approach would be to search for croxyproxy's location replacement string, __cpLocation, and replace it with location. However, different proxies use different replacement names. Some use __cpLocation, others use other weird identifiers. So hardcoding a fix like “replace __cpLocation back to location” is fragile.
I needed:
- A generic fix (no hardcoding proxy identifiers).
- A fix that works even if the proxy is rewriting the word location in my JavaScript too.
The base64 trick: hiding the word location from the proxy
If the proxy rewrites every literal location it sees, the simplest move is to just not use location. Easy enough. I've seen tricks for this before in Lua when I reverse-engineered RestedXP's guide protection system (if I remember correctly, they obfuscate their usage of BNGetInfo, e.g. _G("\x42\x4E\x47\x65\x74\x49\x6E\x66\x6F")).
This trick works in JavaScript too, of course. In client/index.html, I decode the following at runtime:
// Because these proxies try to replace every `location`, we use a base64 encoded string.
const suffix = 'pb24=';
const locStr = atob('bG9jYXR' + suffix); // "location"
const loc = window[locStr]; // window.location
That atob() happens after the proxy has already done its HTML/JS rewriting, so it can’t “pre-corrupt” the string. I split the string into two to make it even harder to detect, and I use 'atob' because I can, but String.fromCharCode or hex-escaping window['\x6c\x6f\x63\x61\x74\x69\x6f\x6e'] might also work.
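For reference, the alternatives mentioned above would look like this (variable names are mine):
const viaCharCodes = String.fromCharCode(108, 111, 99, 97, 116, 105, 111, 110); // "location"
const viaHexEscapes = window['\x6c\x6f\x63\x61\x74\x69\x6f\x6e']; // window.location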
The broken shader pattern is always structurally the same:
layout(<something> = <number>)
So I match that generically and replace <something> with the correct identifier:
source.replace(/layout\s*\(\s*[^=)]+\s*=\s*(\d+)\s*\)/g, 'layout(' + locStr + ' = $1)');
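Wrapped up as the helper that the WebGL hook below calls (a sketch; the real function may handle more cases):
function fixCorruptedShaderSource(source) {
  // Restore any mangled layout qualifier to `layout(location = N)`,
  // using locStr so the word never appears literally in this file.
  return source.replace(/layout\s*\(\s*[^=)]+\s*=\s*(\d+)\s*\)/g, 'layout(' + locStr + ' = $1)');
}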
The WebGL hook: patch shaderSource (WebGL1 + WebGL2)
Since Three.js calls gl.shaderSource(shader, source), I patch shaderSource itself:
const originalShaderSource = WebGLRenderingContext.prototype.shaderSource;
Object.defineProperty(WebGLRenderingContext.prototype, 'shaderSource', {
  value: function (shader, source) {
    return originalShaderSource.call(this, shader, fixCorruptedShaderSource(source));
  },
  writable: true,
  configurable: true,
});
And I apply the same patch to WebGL2RenderingContext if it exists.
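A sketch of that guard, assuming the same shape as the WebGL1 patch above:
if (typeof WebGL2RenderingContext !== 'undefined') {
  const originalShaderSource2 = WebGL2RenderingContext.prototype.shaderSource;
  Object.defineProperty(WebGL2RenderingContext.prototype, 'shaderSource', {
    value: function (shader, source) {
      return originalShaderSource2.call(this, shader, fixCorruptedShaderSource(source));
    },
    writable: true,
    configurable: true,
  });
}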
Once that was in place, the shader compilation errors vanished. At this point, croxyproxy was working but proxyorb was still failing. Why?! Shouldn't that work the same way?
Pass 2: The domain problem, and why removing foony.io made everything easier
Foony has used two domains, at least for the past month:
- foony.com for the main site
- foony.io for static assets
The original reason was practical: serving assets from a cookie-less domain avoids cookie-header bloat on every static file request. That's a real win, but less important than you'd think, given that HTTP/2's HPACK header compression already shrinks repeated headers on the wire.
It's a valid optimization in normal browsing.
Behind proxies, it became a major source of breakage. And Foony's userbase loves proxies. sigh
Proxies treat “the main site” differently than “other sites”
Many proxies are optimized for “proxy this one page / domain.” They’ll successfully load the main HTML, inject scripts, register their own ServiceWorker, etc.
But when the app starts pulling assets from a different origin (like foony.io), you get into all sorts of fun breaky-ness:
- ServiceWorker interception failures like:
  - “ServiceWorker intercepted the request and encountered an unexpected error”
  - “Loading failed for the module with source”
- Query params being required by the proxy infrastructure (and easy to accidentally strip).
- Asset requests losing the proxy’s internal routing metadata.
- Weird proxies that replace the entire request with generic-php-slug.php?someQueryParam=hugeEncodedString (yeah, I didn't bother supporting that).
These proxies’ internal mechanisms depend on query params and URL encoding, and they're quite fragile.
One of the telltale examples was an asset URL like:
https://<proxy-ip>/assets/firebase-<hash>.js?__pot=aHR0cHM6Ly9mb29ueS5jb20
That ?__pot=... is the proxy’s own routing/state which tells the proxy which domain the request is for. If you strip it, proxies can’t resolve the request correctly, and you end up with the ServiceWorker error path.
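You can check this in any console; the param is just the original origin, base64-encoded:
atob('aHR0cHM6Ly9mb29ueS5jb20'); // "https://foony.com"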
“Resource swapping” to the rescue (and why it got complicated fast)
At one point, I tried a workaround: detect “we’re proxied,” then swap any foony.io resource URLs to the current origin so the proxy would see everything as same-origin.
That sounds reasonable, and it worked for croxyproxy, but it added a lot of complexity:
- You need to replace link and script tags that already exist in the HTML.
- You need a MutationObserver to handle dynamically-injected tags (modulepreload, stylesheet, etc.). There's a sketch of both after this list.
- You have to preserve the proxy’s query parameters, or you break their routing. And different proxies do this differently. Because of course they do.
- And you still have to keep the logic generic (no proxy-specific globals) so that the code doesn't become a bloated dumpster-fire.
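Here's a minimal sketch of the swapping idea (a simplified reconstruction, not the exact production code; loc is the decoded window.location from the base64 trick, and the selectors and helper names are mine):
function swapToCurrentOrigin(url) {
  const parsed = new URL(url, loc.href);
  if (parsed.hostname !== 'foony.io') return url;
  parsed.protocol = loc.protocol;
  parsed.host = loc.host; // the path and the proxy's query params survive untouched
  return parsed.toString();
}

// Tags already present in the HTML.
for (const el of document.querySelectorAll('script[src], link[href]')) {
  const attr = el.hasAttribute('src') ? 'src' : 'href';
  el.setAttribute(attr, swapToCurrentOrigin(el.getAttribute(attr)));
}

// Tags injected later (modulepreload, stylesheets, etc.). Timing caveats
// apply to scripts that have already started fetching by the time this runs.
new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node.nodeType !== 1) continue; // elements only
      const attr = node.tagName === 'SCRIPT' ? 'src' : node.tagName === 'LINK' ? 'href' : null;
      if (attr && node.getAttribute(attr)) {
        node.setAttribute(attr, swapToCurrentOrigin(node.getAttribute(attr)));
      }
    }
  }
}).observe(document.documentElement, { childList: true, subtree: true });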
This is also where the “base64 trick” came up again: even in my own JavaScript, I had to be careful about the literal string location because the proxy might rewrite it.
Reverse-engineering CroxyProxy’s injected script
At this point I got curious: what is the proxy actually doing to my page? Is it injecting its own ads? Something worse?
CroxyProxy’s client-side script is heavily obfuscated.
(new Function(new TextDecoder('utf-8').decode(new Uint8Array((atob('NjY3NTZlN...')).match(/.{1,2}/g).map(b => parseInt(b, 16))))))();
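Unpacking that bootstrap step by step:
// 1. atob(...)                       -> base64-decode into a long string of hex pairs
// 2. .match(/.{1,2}/g)               -> split it into two-character chunks
// 3. .map(b => parseInt(b, 16))      -> parse each chunk as a byte
// 4. new TextDecoder('utf-8').decode -> turn the bytes back into JavaScript source
// 5. (new Function(...))()           -> compile and run the recovered script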
Running it yields:
function a0_0x5ebf(_0x213dc9,_0x1c49b6){var _0x4aa7c1=a0_0x4274();return a0_0x5ebf=function(_0x159600,_0x51d898){_0x159600=...
Based on this, it looks like croxyproxy is using Obfuscator.io for this obfuscation, which is thankfully easy enough to deobfuscate with webcrack.
This results in much more readable JavaScript:
((_0x15ca2c, _0x489eaa) => {
if (typeof module == "object" && module.exports) {
module.exports = _0x489eaa(require("./punycode"), require("./IPv6"), require("./SecondLevelDomains"));
} else if (typeof define == "function" && define.amd) {
define(["./punycode", "./IPv6", "./SecondLevelDomains"], _0x489eaa);
} else {
_0x15ca2c.URI = _0x489eaa(_0x15ca2c.punycode, _0x15ca2c.IPv6, _0x15ca2c.SecondLevelDomains, _0x15ca2c);
}
})(this, function (_0x724467, _0x275183, _0x219d84, _0x2dcc2c) {
var _0x114c1e = _0x2dcc2c && _0x2dcc2c.URI;
function _0x5a9187(_0x2fd4ea, _0x3bd460) {
var _0x23a83f = arguments.length >= 1;
if (!(this instanceof _0x5a9187)) {
Nice. Now we can see what it's doing. And... it seems mostly fine. I think the obfuscation is mostly to help prevent detection of the proxy. Mostly.
We've got some ad / UI injection:
Bi(_0x308e2f) {
console.log("Ads: " + _0x331b11.showAds);
console.log("Ad codes: " + !!_0x331b11.adsJson);
console.log("Adblock: " + _0x308e2f);
_0x34fa18.document.body.insertAdjacentHTML("afterbegin", _0x331b11.modal);
if (_0x331b11.header) {
if (_0x331b11.header) {
this.Ri(_0x308e2f);
}
[...document.querySelectorAll("#__cpsHeader a")].forEach(_0xcce595 => {
_0xcce595.addEventListener("click", function (_0x3ef5f1) {
if (this.target === "_blank") {
_0x34fa18.open(this.href, "_blank").focus();
} else {
_0x34fa18.location.href = this.href;
}
_0x3ef5f1.stopImmediatePropagation();
_0x3ef5f1.preventDefault();
}, true);
});
}
return this;
}
There are some other spots, but basically it's just showing ads, including pop-under-style ads. It also uses FuckAdBlock.
The real replacement for the strings, however, is happening server-side. And who knows what all that's doing.
Either way, you should absolutely not be using web proxies if you care about your account security. If you must, avoid entering any of your PII / account / purchase information.
"Resource swapping" in the bin
I decided that the complexity from resource swapping, coupled with the complexity that foony.io support added elsewhere in the code, wasn't worth the small network savings of beautiful, cookieless requests. We were also seeing an unexplained drop in our gameplay conversions since adopting foony.io, so I suspect there were other issues with foony.io we weren't aware of.
So I removed foony.io. At least for now.
Once I deleted the foony.io CDN logic and standardized everything on foony.com, proxy support got dramatically simpler:
- Same-origin asset loads.
- Fewer “special cases” to explain to a proxy ServiceWorker.
- Less rewriting.
- Less fragile code.
In short, removing foony.io was an architectural simplification that reduced the surface area for weird proxy behavior.
Pass 3: What works, what doesn’t, and why
Confirmed working proxies
At this point, Foony works behind:
- croxyproxy
- proxyorb
Some other proxies probably work. I bet most still don't. But at least the important ones that people use to play games on seem to work.
Why not “all proxies”?
Some proxies simply can’t support a modern multiplayer web app. Examples:
- Proxies that don’t properly support HTTPS.
- Proxies that break or block WebSockets (Foony uses real-time networking). Technically you could work around this, but it'd add complexity.
- Proxies that have too many restrictions around cross-origin requests, headers, or ServiceWorkers.
Key takeaways
Web proxies are very insecure
They’re middleware that:
- rewrites HTML
- rewrites JavaScript
- sometimes injects a ServiceWorker
- and often depends on query params / URL encoding to route requests
- can fiddle with your pages in any number of ways
I was surprised just how deep some proxies go: they’ll rewrite shader source strings, comments, and God knows what else.
Sometimes the best fix is architectural
The WebGL patch made the games render again, but removing the multi-domain CDN strategy made proxy support stay stable.
It’s a good reminder: clever optimizations can be perfectly reasonable until they collide with hostile middleware. Or users' browser extensions. Or Safari. Or language settings. Or accessibility features. Or solar flares. Or anything, really.
Conclusion
Foony now works behind the proxies that matter (croxyproxy and proxyorb), without turning the codebase into a proxy-specific mess:
- A generic Three.js shader fix (no proxy-specific identifiers).
- A simpler domain strategy (foony.com everywhere).