A guide to modern load balancing: how to find the right solution

The idea behind load balancing is straightforward enough. It’s simply spreading traffic as well as the accompanying workload across a number of servers within a network instead of expecting one server to handle all of the traffic a website gets. It’s kind of like turning Christmas dinner into a potluck instead of expecting one host to prepare, serve and clean up a gigantic meal on their own.

Investing in a load balancing solution isn’t so simple, however. In many cases the costs may end up outweighing the benefits, and in other cases the benefits are hampered by the way the solution actually works.

The benefits of balance

The more servers available to do the necessary work, the more quickly and efficiently the work gets done, and the less work any one server has to do. This increases the overall efficiency of the network, minimizing downtime while maximizing performance and reliability.

There’s another inherent benefit to load balancing, which is that it offers protection against DDoS attacks. It’s much harder to overwhelm a targeted website with malicious traffic when that malicious traffic is spread over a multi-server environment.

How a load balancer works

With a load balancing solution, all requests to a website go to the load balancing server. This server holds the IP addresses of the pool of actual servers that will be sharing the traffic and workload. The load balancing server either acts as a proxy between a website's servers and its users, accepting all traffic itself, or as a gateway that assigns each user to a server and then removes itself from the interaction thereafter.
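To make the proxy mode concrete, here's a rough sketch in Go of a balancer that accepts every request itself and forwards it to one of the servers in its pool. The backend addresses are made-up placeholders, and a real product would add TLS, health checks and smarter selection.

```go
// A minimal sketch of the proxy mode described above: the balancer accepts
// every request itself and forwards it to one server in the pool.
// The backend addresses are made-up placeholders.
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The pool of actual servers that will share the traffic (hypothetical addresses).
	pool := []*url.URL{
		{Scheme: "http", Host: "10.0.0.11:8080"},
		{Scheme: "http", Host: "10.0.0.12:8080"},
		{Scheme: "http", Host: "10.0.0.13:8080"},
	}

	// One reverse proxy per backend, created up front.
	proxies := make([]*httputil.ReverseProxy, len(pool))
	for i, u := range pool {
		proxies[i] = httputil.NewSingleHostReverseProxy(u)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Pick a backend (here, at random; selection algorithms are covered next)
		// and hand the request to its proxy.
		proxies[rand.Intn(len(proxies))].ServeHTTP(w, r)
	})

	// Clients only ever talk to the balancer's own address.
	log.Fatal(http.ListenAndServe(":80", nil))
}
```

Because every request terminates at the balancer, servers can be added to or removed from the pool without clients ever noticing.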

Incoming requests are then distributed by the load balancing server according to whichever load balancing algorithm the administrator has chosen.
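As a rough illustration of what that choice looks like, the sketch below implements two common algorithms: Round Robin, which hands requests out in strict rotation, and least connections, which favors whichever server is currently doing the least work. The Backend type and its connection counter are illustrative assumptions, not any particular vendor's API.

```go
// A sketch of two common selection algorithms an administrator might choose
// between. The Backend type and connection counts are illustrative only.
package main

import (
	"fmt"
	"sync/atomic"
)

type Backend struct {
	Addr        string
	ActiveConns int64 // tracked by the balancer as requests start and finish
}

// Round Robin: hand out backends strictly in order, wrapping around.
type RoundRobin struct {
	backends []*Backend
	next     uint64
}

func (rr *RoundRobin) Pick() *Backend {
	n := atomic.AddUint64(&rr.next, 1)
	return rr.backends[n%uint64(len(rr.backends))]
}

// Least connections: prefer the backend currently doing the least work.
func LeastConnections(backends []*Backend) *Backend {
	best := backends[0]
	for _, b := range backends[1:] {
		if b.ActiveConns < best.ActiveConns {
			best = b
		}
	}
	return best
}

func main() {
	pool := []*Backend{{Addr: "10.0.0.11"}, {Addr: "10.0.0.12"}, {Addr: "10.0.0.13"}}
	rr := &RoundRobin{backends: pool}
	fmt.Println(rr.Pick().Addr, rr.Pick().Addr, rr.Pick().Addr) // cycles through the pool

	pool[1].ActiveConns = 5
	fmt.Println(LeastConnections(pool).Addr) // picks an idle server instead
}
```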

Common load balancing solutions (and their shortfalls)

A number of solutions have historically been employed for load balancing. With the new challenges brought on by the current reality of the internet, however, these solutions aren't quite cutting it.

Hardware: Hardware load balancing used to be based on a hardware load-balancing device (HLD). These single-function devices are largely being replaced by application delivery controllers, or ADCs, which run on server-based hardware and bundle a number of functions for optimizing enterprise applications, including data center resource use, reliability, security and end-user performance.

While ADCs are a marked improvement over HLDs, they introduce a single point of failure and may bottleneck traffic if they aren't configured or maintained correctly. They're really only viable for organizations that have the dedicated staff to set up and maintain them, as well as the capital to cover the high maintenance overhead. Regardless, both HLDs and ADCs are widely considered aging technology.

Software: As one might expect, software load balancing solutions are based on software, which can run as part of an operating system, be implemented with a DNS solution (see below) or form a component of a virtual service and application delivery solution, the more common option today. Software-based options are often touted as less expensive than hardware-based ones.

The issue with these software-based solutions is that, regardless of implementation, they still require hardware to run on, as well as intensive setup and maintenance. This not only bumps up the total cost of ownership but also requires organizations to work with multiple vendors, which can lead to compatibility issues. Due to the expense and logistics, these solutions are often beyond the means of many organizations.

DNS: DNS load balancing is a simple approach in which an administrator establishes load balancing pools for different geographic regions, allowing the solution to enhance site performance by reducing the distance between users and data centers. This can be effective for simple websites but otherwise has notable limitations.

For instance, DNS load balancing uses the most basic load balancing algorithm, Round Robin, which simply sends each request to the next server in a list. This is a problem because DNS records lack failure detection: without a third-party monitoring solution, users will be directed to servers that are down simply because those servers are next in the rotation.
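The sketch below shows why that matters from the client's point of view: DNS hands back whatever addresses are in the record, healthy or not, and a typical client simply takes the first one. The domain name is a placeholder.

```go
// A sketch of plain DNS round robin as a client experiences it. DNS returns
// every address in the pool with no notion of which servers are up, so a dead
// server keeps receiving its share of requests.
// "www.example.com" is a placeholder domain.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("www.example.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// A typical client just uses the first record it is given. If that server
	// is down, the request fails even though healthy servers exist in the pool.
	fmt.Println("client will connect to:", addrs[0])
}
```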

Open Source: Open source load balancing relies on open source software and other open source components. This offers a clear cost advantage, to be sure, but also several distinct disadvantages: the lack of support and service that tends to accompany open source solutions, which makes highly expert in-house staff a necessity, and the hardware these solutions still need to run on, which can be expensive.

Where to look instead

In order to maximize the benefits of a load balancing solution, organizations may want to consider a content delivery network (CDN). This is a global network of proxy cache servers designed to improve website speed and performance through front-end optimization and network performance optimization, and because it's a multi-server environment, it provides built-in global server load balancing.

Otherwise, organizations will want to look to the cloud for their load balancing in order to reduce costs and eliminate hardware requirements as well as complicated configuration and maintenance. Leading cloud-based solutions also typically feature both performance-based and geo-targeting algorithms, accurate health checks and instant implementation of routing changes.
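For a rough idea of what those health checks involve, the sketch below polls each backend on a schedule and records which ones respond, so routing can skip the rest. The /healthz path, addresses and interval are assumptions for illustration, not any specific provider's behavior.

```go
// A minimal sketch of an active health check: poll each backend periodically
// and keep a map of which ones are currently healthy, so the request router
// can skip the rest. Addresses, the /healthz path and the interval are
// illustrative assumptions.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

type healthChecker struct {
	mu      sync.Mutex
	healthy map[string]bool
}

// check marks a backend healthy only if it answers 200 OK within the timeout.
func (h *healthChecker) check(addr string) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://" + addr + "/healthz")
	ok := err == nil && resp.StatusCode == http.StatusOK
	if resp != nil {
		resp.Body.Close()
	}
	h.mu.Lock()
	h.healthy[addr] = ok // the router sees the change on its next lookup
	h.mu.Unlock()
}

func main() {
	backends := []string{"10.0.0.11:8080", "10.0.0.12:8080"} // placeholder pool
	hc := &healthChecker{healthy: map[string]bool{}}
	for {
		for _, b := range backends {
			hc.check(b)
		}
		hc.mu.Lock()
		fmt.Println("healthy backends:", hc.healthy)
		hc.mu.Unlock()
		time.Sleep(10 * time.Second)
	}
}
```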

There are different levels of quality when it comes to load balancing, just as there are with potluck Christmas dinners. The difference is that showing up with a bag of Doritos has never cost an organization major money or caused major grief, while the wrong load balancing solution can do both. Choose wisely.