Scaling a football simulation game to 8 million users.
Although useful at the beginning, the MVP showed serious problems very soon. Development had slowed to a crawl due to poor code quality, but the biggest problem was the architecture and design, which could not scale to the demands of a hungry fandom. Every time Piqué tweeted about the game, thousands of fans came running to sign up, crashing the entire system. On top of that, the game's monetized features were too slow to be engaging, and users often walked away in frustration.
That's when we came in. Once we stabilized the platform, and without ever pausing feature development, we overhauled the whole system. We took the self-hosted monolithic Rails application, deployed once a week with much trepidation, and turned it into a rock-solid, multi-region AWS deployment that handled 8 million registered accounts, with thousands of concurrent users at any given time.
- Data Engineering
- Software Architecture
We built a multi-world, multi-region production system with millions of users and thousands of concurrent players.
To improve maintainability, we also ported the monolith to a well-documented JSON API with a thick Ember client, which was paramount to the later development of the iOS and Android clients.
We also rewrote their exclusive revenue generator, an auction system, from a slow feature with very low engagement into a real-time system that players loved, all while setting up business and production metrics to measure every step of the project.
Aside from working closely with the entire C-level as technology advisors, by the time we left we had hired a team of 10 developers, to whom we handed over a rock-solid production system along with a set of best practices and guidelines to support further development.
Having reached the right market early, and thanks to Gerard Piqué's strong personal brand, there was no shortage of paying customers for Golden Manager. Every ad and every social media campaign brought in thousands of users eager to play, and ready to spend real money on the game's virtual auction system, where players compete to acquire the most valuable footballers in the game's world.
The self-hosted Rails monolith was straining to serve the increasing load, but with no in-house development team and serious structural problems in the architecture, it was far from stable. Deploys were infrequent, for fear of rocking the boat. One of the major pain points was the sign-up and log-in service, which handled users from all over the world and crashed consistently with every advertising campaign, and even with Piqué's tweets about the game.
At the core of the game, the virtual auction system was the exclusive revenue-generating feature. However, it was nowhere near real-time and too unreliable for users to enjoy, so overall usage was very low. On top of all this, the feature backlog kept growing, and there were no metrics in place to assess the health of the product.
The challenge of too much success
After stabilizing the platform through refactoring and low-hanging-fruit fixes, we began reworking the most critical parts of the system, setting up business and system metrics throughout. By the end, we had migrated everything to a multi-region AWS deployment. We also split the playable world into regions, load-balanced after login, to offer a customized experience to users in different parts of the globe.
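The post-login world split can be pictured as a simple routing step: once a player authenticates, they are assigned to the region hosting their game world. The following Ruby sketch is illustrative only; the region names and country mapping are assumptions, not Golden Manager's production configuration.

```ruby
# Hypothetical sketch: after login, route each player to the region
# hosting their game world. Region names and the country table are
# illustrative assumptions, not the real deployment.
REGIONS = {
  "eu-west" => %w[ES FR IT PT DE GB],
  "us-east" => %w[US CA MX],
  "sa-east" => %w[BR AR CL CO]
}.freeze

DEFAULT_REGION = "eu-west"

# Map a player's country code to a game-world region, falling back to
# a default region for unmapped countries.
def region_for(country_code)
  REGIONS.each do |region, countries|
    return region if countries.include?(country_code)
  end
  DEFAULT_REGION
end

region_for("BR") # => "sa-east"
```

Doing this assignment after login, rather than at the DNS or load-balancer level, lets the game keep a single global account system while still giving each region its own world.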
One of the most important structural changes was gradually porting the Rails monolith to a JSON API with an Ember app on the frontend, page by page, so as to avoid a fearsome big-bang rewrite and to keep delivering value. We successfully shipped dozens of features while making this kind of structural change across the whole system.
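A page-by-page port like this is often implemented with a thin routing layer that sends already-migrated paths to the new JSON API while everything else still hits the legacy monolith. Here is a minimal Ruby sketch of that idea; the path prefixes and handler names are hypothetical.

```ruby
# Hypothetical sketch of an incremental (strangler-style) migration:
# a thin router sends ported paths to the new JSON API and leaves the
# rest on the legacy monolith. Prefixes and labels are illustrative.
PORTED_PREFIXES = ["/api/auctions", "/api/players"].freeze

# Decide which backend should serve a given request path.
def backend_for(path)
  if PORTED_PREFIXES.any? { |prefix| path.start_with?(prefix) }
    "json_api"   # new Ember-facing JSON API
  else
    "monolith"   # legacy server-rendered Rails pages
  end
end
```

As each page is ported, its prefix moves into the ported list, so the monolith shrinks without a risky cut-over.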
Decoupling the application behind an API also proved paramount when native mobile applications were developed later, as we worked hand in hand with the iOS and Android developers to tailor and optimize certain endpoints for mobile.
Finally, we rewrote the virtual auction system from scratch as a real-time application with WebSockets, Ember, and Ruby. Soon revenue started flowing in from highly engaged paying customers with great retention, competing with each other to acquire the best football players.
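The heart of such a real-time auction is small: accept a bid only if it beats the current top bid, then broadcast the new price to everyone watching. This Ruby sketch abstracts the WebSocket transport away as subscriber callbacks; the class and method names are assumptions, not the production code.

```ruby
# Minimal sketch of real-time auction bid logic. In production the
# broadcast would go over a WebSocket channel; here subscribers are
# plain callbacks. Names are illustrative assumptions.
class Auction
  attr_reader :top_bid, :top_bidder

  def initialize(starting_price)
    @top_bid = starting_price
    @top_bidder = nil
    @subscribers = []
  end

  # Register a listener; stands in for a WebSocket subscription.
  def subscribe(&block)
    @subscribers << block
  end

  # Accept a bid only if it beats the current top bid, then notify
  # every subscriber of the new price.
  def bid(user, amount)
    return false unless amount > @top_bid
    @top_bid = amount
    @top_bidder = user
    @subscribers.each { |s| s.call(user, amount) }
    true
  end
end
```

Pushing every accepted bid to all watchers immediately is what makes the auction feel live, which is exactly the quality the original slow implementation lacked.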
To tie everything together, we tracked every action in the system and built an ETL pipeline to ship the data to Amazon Redshift, where the Business Intelligence team could analyze it to drive product decisions going forward. We also set up data pipelines that gave the sales team an end-to-end view of the conversion funnel throughout the gaming experience.
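A common shape for this kind of pipeline is to flatten tracked events into CSV files that Redshift can bulk-load with a COPY command from S3. The sketch below shows only the transform step, with illustrative field names; it is not the actual Golden Manager pipeline.

```ruby
require "csv"
require "json"

# Hypothetical ETL transform: flatten tracked events into CSV rows
# suitable for a Redshift COPY from S3. Field names are illustrative.
def events_to_csv(events)
  CSV.generate do |csv|
    csv << %w[user_id event_name occurred_at properties]
    events.each do |event|
      csv << [
        event[:user_id],
        event[:name],
        event[:at],
        JSON.generate(event[:props] || {})  # arbitrary event metadata
      ]
    end
  end
end
```

Keeping arbitrary event metadata as a JSON column lets analysts slice new dimensions later without changing the table schema for every new tracked feature.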
Scaling with the right architecture
When we started working with Golden Manager there was no in-house team; they still relied on the original agency that had built the MVP for bug fixes and feature development. We helped the CTO build a competent team from scratch, interviewing dozens of candidates to find the best fit.
Eventually we hired and trained almost a dozen developers in our best practices and in the new architecture we had put in place. After we left, this productive team kept shipping features at a rapid pace, deploying multiple times a day without fear, and with a pervasive metrics-driven culture where features and metrics go hand in hand.
Building a productive team from scratch
By the end of the project, Golden Manager was far from where it had started.
It now had a multi-world, multi-region production system with millions of users and thousands of concurrent players at any given time, supported by a productive team built from scratch, deploying multiple times per day and guided by pervasive metrics.