The Need for Speed
Performance is an often-overlooked consideration when designing and building cloud services.
The consequences, however, are significant.
Time and time again, performance problems seem to appear out of the blue. Teams are confused about the cause, so they search for a single reason that, once resolved, will set things right.
In our experience, there is no single reason, and no single fix will resolve the situation. These issues stem from an aggregation of many small problems, which compound into one big problem that is not only difficult to identify but exceptionally difficult to resolve.
We've seen this pattern repeatedly, so when we set out on our mission to build the world's best API platform, we wanted to solve this issue once and for all. And we did.
Our design technology automatically indexes your database to ensure optimal performance for every query in the system.
Overcoming the Effects of Network Latency
Network latency is something you can't control, but you can manage it with good design.
We think about latency like a sort of networking tax. Every time you send a request to a cloud service, you have to pay the tax. This networking tax makes things expensive (in terms of time), and the more times you have to pay the tax, the slower your app becomes. To make things worse, this networking tax isn't charged at a flat rate. It's based on the distance between the user and the cloud service.
Unfortunately, if the design of your cloud service requires multiple sequential requests to fetch all the data you need, there is very little you can do to make the app fast.
The moral of the story is simple. If you want to pay as little networking tax as possible, you have to make as few network requests as possible.
It sounds simple, but it's deceptively complex. Put simply, the solution is to package all of the required data in a single response, without requiring custom endpoints for each app.
By taking this approach, your app remains performant no matter where your users are located, and the effects of latency are minimised.
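The arithmetic behind the networking tax can be sketched in a few lines. This is a hypothetical illustration (the function name, request counts, and timings are assumptions, not measurements): each sequential round trip pays the full round-trip time (RTT), so total latency scales with the number of requests, not just the distance.

```python
# Hypothetical model: each sequential request pays the full "networking tax"
# (one round-trip time, RTT), plus a small amount of server processing time.

def total_latency_ms(round_trips: int, rtt_ms: float, server_time_ms: float = 5.0) -> float:
    """Total time to fetch a screen's worth of data over sequential requests."""
    return round_trips * (rtt_ms + server_time_ms)

# A user whose RTT to the service is 150 ms:
chatty = total_latency_ms(round_trips=8, rtt_ms=150)   # 8 sequential requests
batched = total_latency_ms(round_trips=1, rtt_ms=150)  # one aggregated response

print(chatty)   # 1240.0
print(batched)  # 155.0
```

With eight sequential requests the user waits well over a second; packaging the same data into a single response cuts that to roughly one RTT, which is why batching keeps the app performant regardless of where the user is located.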
Every cloud service that we design and build has this capability built into its core.
Edge Distribution for Ultra-low Latency
Whilst the technique described above should meet the performance requirements of almost all apps, some apps may require ultra-low latency cloud services. In those cases, the solution is to deploy edge services in multiple locations worldwide and to direct users to their nearest deployment using latency-based DNS routing.
Reach out to us if your cloud service requires Edge Distribution, and we'll gladly go through the details with you.