Performance

Performance by design

Building high-performance APIs

Generally speaking, performance is not something you can easily retrofit to a system. It's either a design-level consideration from the outset, or it isn't.

We've seen too many examples of APIs where developers solved the problem at hand without regard for production performance and then, post-deployment, found that performance was lacking. At that point, once consumers have integrated with the API and started depending on it, the team finds itself in a horrible position with very few options to remedy the situation.

Time and again, problems seemingly appear out of the blue. Teams are often confused and unsure of the cause, so they search for a single reason that, once resolved, will set things right. In our experience, there is no single reason: these issues stem from an aggregation of many minor problems, which compound into one major problem that is difficult to identify and resolve.

We wanted to avoid this situation at all costs, so when we set out on our mission to build the world's best API development technology, we resolved to fix this issue once and for all. And we did: our Tesseract design technology automatically indexes your database to ensure optimal performance of every query within the system.

The net result is that every API we design and build is pre-optimised by default, without the crippling costs that usually make up-front optimisation a wasted effort.
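As a rough illustration of the underlying principle only (not of Tesseract itself), an index whose keys match a query's filter and sort fields lets the database answer that query without scanning the whole collection. The sketch below assumes a MongoDB-backed service and the official Node.js driver; the connection string, collection, and field names are hypothetical.

```typescript
import { MongoClient } from "mongodb";

// Hypothetical example: an index whose keys match a query's filter and
// sort fields lets the database serve that query from an index scan
// instead of scanning the whole collection.
async function main(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const orders = client.db("shop").collection("orders");

  // Index aligned with the query below: equality on customerId,
  // then a descending sort on createdAt.
  await orders.createIndex({ customerId: 1, createdAt: -1 });

  // Served by the index above; no collection scan required.
  const recent = await orders
    .find({ customerId: "cust_123" })
    .sort({ createdAt: -1 })
    .limit(20)
    .toArray();

  console.log(`Fetched ${recent.length} recent orders`);
  await client.close();
}

main().catch(console.error);
```

Doing this by hand for every query across a growing API is exactly the kind of work that is tedious to retrofit, which is why it pays to handle it at design time.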

Addressing network latency

Network latency is something you can't control, but you can manage it with good upfront design.

We think of latency as a sort of networking tax. Every time you send a request to a cloud service, you pay the tax. This tax makes things expensive (in terms of time), and the more times you have to pay it to produce an output, the slower your app becomes. To make matters worse, the tax isn't charged at a standardised flat rate; it's based on the distance between the consumer and the API. A consumer on the other side of the world may be having an awful experience, but your developers probably won't even realise it: they're more than likely located close to your servers, so to them performance seems acceptable (they're paying so little tax that it's insignificant).

Unfortunately, if your API implements a simple RESTful design and you require multiple types of resources that are related in some nested way, you'll probably find yourself with one of two options:

  • make multiple sequential requests to retrieve all the related resources and live with a slow app.
  • create BFF (backend for frontend) endpoints to mitigate the situation, at the cost of additional development effort and the extra complexity that creeps in as code starts to overlap.

The solution is simple, but impossible to retrofit.

If you want to pay as little networking tax as possible, you must make as few network requests as possible. This sounds simple, but it's actually quite complex. The solution is to package all required resources in a single composite response without requiring custom endpoints for each app.
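As a sketch of what that might look like from the consumer's side (the endpoint path, resource names, and include parameter below are illustrative assumptions, loosely modelled on the JSON:API compound-document pattern rather than on any specific Cloudize API), a single request returns the primary resource together with its related resources, so the client pays the networking tax once rather than once per resource:

```typescript
// Hypothetical composite response, loosely modelled on the JSON:API
// compound-document pattern. The endpoint path, resource names and
// "include" query parameter are illustrative only.

interface Resource {
  type: string;                         // e.g. "orders", "customers", "products"
  id: string;
  attributes: Record<string, unknown>;  // the resource's own fields
}

interface CompositeResponse {
  data: Resource;       // the primary resource that was requested
  included: Resource[]; // related resources packaged into the same response
}

// One round trip fetches an order plus its related customer and products,
// instead of three or more sequential requests.
async function getOrderWithRelations(orderId: string): Promise<CompositeResponse> {
  const res = await fetch(
    `https://api.example.com/orders/${orderId}?include=customer,products`
  );
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return (await res.json()) as CompositeResponse;
}
```

To put the tax in concrete terms: five sequential requests at 200 ms of round-trip latency each cost a full second before any processing happens, while a composite request like the one above costs a single round trip regardless of how many related resources it carries.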

By taking this approach, your app remains performant no matter where your users are located, and the effects of latency are all but eliminated.

Every cloud service that we design and build has this capability built into its core.


Cloudize is a leader in API design and development. By leveraging our skills and technologies, you can radically accelerate your next innovation.

Are you ready to find out more?

Book your FREE Cloudize API Technology Introduction now