Overloaded capacity

One of the trickiest discussions when planning and developing a new digital product is capacity. “Prepare for success” is an adage popularly thrown around, but for any project manager, that statement is as murky as typhoon flood water.

Sure, for more controlled setups such as internal development phases, it’s fairly easy to make estimates, but only because you can know or closely approximate how many people will be using the infrastructure at any given time. Planning capacity for tech products, however, especially those that rely on backend services, can be a real pain.

Even big, experienced firms don’t always get it right. Gaming giant Blizzard has decades of experience managing server deployments for authentication and multiplayer support for its games. Despite that, it has a history of servers crashing during launches. Even with Overwatch, its latest award-winning offering, players suffered connectivity issues at launch, and this happened despite significant testing.

Granted, servers crashing due to wild popularity is the type of problem that is nice to have, one that can readily be remedied if you have the resources (and sales) to fix it as it happens. For game releases, overloaded servers at launch appear to be the norm.

Some marketers would even spin this as a sign of a hit. But not all products can survive crashes during launch. It’s a different story for independent developers and small ventures bootstrapping their way to shipping a product.

The capacity conundrum

There are, of course, formulas for computing capacity. A simple test is to see how many users can connect to a server while maintaining an acceptable level of service. For close-to-deployment scenarios, you can run alpha and (closed and open) beta tests.
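As a rough sketch of what such a formula looks like, Little’s Law relates concurrent users, arrival rate, and session time, which lets you translate a concurrency target into a server count. All the numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Little's Law: L = lambda * W
# (concurrent users = arrival rate * average time a user stays connected).

def required_throughput(concurrent_users: int, avg_session_seconds: float) -> float:
    """Sessions per second the backend must absorb to sustain a given
    number of concurrent users (Little's Law rearranged: lambda = L / W)."""
    return concurrent_users / avg_session_seconds

def servers_needed(concurrent_users: int, capacity_per_server: int,
                   headroom: float = 0.3) -> int:
    """Servers required for a concurrency target, with spare headroom."""
    return math.ceil(concurrent_users * (1 + headroom) / capacity_per_server)

# Hypothetical: 5,000 concurrent users, 10-minute average sessions,
# each server load-tested to hold 2,000 concurrent connections.
print(required_throughput(5000, 600))  # ~8.3 new sessions per second
print(servers_needed(5000, 2000))      # 4 servers, with 30% headroom
```

The headroom factor is the judgment call: too little and a launch spike takes you down; too much and you pay for idle hardware.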

One of the first capacity plans I was involved in was for a website designed to deliver browser-based casual games through subscriptions. This was before the days of cloud computing, leaving us with two choices: put up our own server, or rent one from a data centre. Our basis was the commercial team’s estimate that the site would have 5,000 concurrent users during peak hours. That was supposed to simplify our task.

We simulated that usage and optimised what we could in the backend. Finally, we decided to rent a server with specifications that could take the load, plus a bit extra. The setup held up extremely well during launch and for months after. Unfortunately, that was only because we didn’t make enough sales to reach even half the server’s capacity.

This could’ve been a tragic story if not for some positives. It was a good thing we had rented the servers on a monthly contract, so we were able to scale down immediately when the sales projections went sour. If we had opted for a longer-term contract or put up the infrastructure ourselves, we could have been thousands of dollars in the hole. Sales eventually picked up, so the effort wasn’t entirely a loss.

Better choices with the cloud

Looking back, I wonder whether cloud computing would’ve solved our problem back then. Cloud computing supposedly takes the guesswork out of capacity: you can start with a small chunk of computing power and storage, then scale on demand as traffic picks up or slows down. Amazon Web Services alone covers most hosting requirements.

Engineers could argue all day about the merits of dedicated machines versus the cloud, but what’s key here is that digital startups now have choices in how to put up their infrastructure. With infrastructure-as-a-service gaining popularity, development teams can opt for cloud, on-premises deployments, or a hybrid of both. Computing load can even be distributed dynamically through load balancing, which ensures applications can scale and stay available without downtime concerns.
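The load-balancing idea can be sketched in a few lines. The simplest scheme, round robin, just rotates incoming requests across a pool so no single machine absorbs the whole load; the server names here are made up:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin load balancer: rotate requests across a pool."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def route(self, request_id: str) -> str:
        # Hand the next request to the next server in the rotation.
        server = next(self._pool)
        return f"{request_id} -> {server}"

# Hypothetical three-server pool.
lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
for i in range(4):
    print(lb.route(f"req{i}"))
# req0 -> app-1, req1 -> app-2, req2 -> app-3, req3 -> app-1 (wraps around)
```

Real balancers add health checks and weighting on top, but the core idea is this rotation: capacity becomes the sum of the pool, not of any one box.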

But there’s still the cost issue. If instantaneous scaling isn’t a requirement, a strong cost case can be made for dedicated server setups. And it’s always a unique situation for us in Southeast Asia: in the Philippines, servers in local data centres can cost twice as much as rented servers in the US with similar hardware specifications. Putting up one’s own data centre also means expensive hardware, steep utility bills, and unreliable internet speeds. All of this needs to be considered before making the investment.
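The rent-versus-build trade-off comes down to simple break-even arithmetic. Every figure below is a hypothetical placeholder, not a quote from any provider:

```python
def breakeven_months(build_cost: float, build_monthly: float,
                     rent_monthly: float) -> float:
    """Months before owning infrastructure becomes cheaper than renting.
    Returns infinity if renting never costs more (no break-even point)."""
    if rent_monthly <= build_monthly:
        return float("inf")
    return build_cost / (rent_monthly - build_monthly)

# Hypothetical: $12,000 of hardware up front plus $400/month in utilities
# and bandwidth, versus renting an equivalent server for $900/month.
print(breakeven_months(12000, 400, 900))  # 24.0 months
```

If your sales projections are shakier than a two-year horizon, as ours were, that number is a strong argument for renting month to month.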

Never lose sight of the business end

Many startups and development teams get so excited about the tech end that they fail to consider the business end. Sad to say, it’s quite common for startups to fall into the trap of going all-out on tech investment without a realistic capacity plan tied to a commercial strategy.

Case in point: how many development and design firms splurge on equipping all their staff with sleek Macs, even when custom PCs cost a fraction of the price, just because Apple products make them look legit? Thousands of startup dollars have been wasted on such decisions.

The flexibility offered by the cloud shouldn’t be treated as a failsafe for not considering the market. Infrastructure investments should be made with a clear business strategy in place. While developers work on making the system and infrastructure work for X number of users, the commercial team should also be hard at work, converting X (and more) prospects into customers.


The views expressed here are of the author’s, and e27 may not necessarily subscribe to them. e27 invites members from Asia’s tech industry and startup community to share their honest opinions and expert knowledge with our readers. If you are interested in sharing your point of view, submit your post here.

Featured Image Copyright: thamkc / 123RF Stock Photo