A few questions seem to come up again and again from the people who’ve been reading my posts on queue theory. Perhaps the most common question is: “How do I model multi-server applications using queues?”. This is an excellent question, since most of us will be running production systems with more than one server, be that multiple collaborating services or just a simple load-balanced service that has a few servers sharing the same incoming queue of customers.
In this post, I want to address the simplest model for multiple servers: the M/M/c queue. Like the M/M/1 queue I described in an earlier post, the M/M/c queue has inter-arrival times exponentially distributed with rate $\lambda$, and service times exponentially distributed with rate $\mu$. The difference, which should be obvious, is that rather than having just one server, we can have any positive number $c$ of servers.
The measure of traffic intensity for M/M/1 and M/M/c queues is $\lambda/\mu$. For M/M/1 queues this is also the measure of utilisation, but for M/M/c queues the utilisation is $\rho = \lambda/(c\mu)$. The stability condition for M/M/c queues is $\rho = \lambda/(c\mu) < 1$.
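For example, with some illustrative numbers: if $\lambda = 9$ requests per second, $\mu = 4$ requests per second per server, and $c = 3$, then $\rho = 9/(3 \times 4) = 0.75$ and the queue is stable; with only $c = 2$ servers, $\rho = 9/8 > 1$ and the queue would grow without bound.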
What to model?
One of the most important questions we can answer is: what should be modelled as a multi-server queue? One reader asked whether a multi-threaded server is best modelled using an M/M/c queue with $c$ equal to the number of threads. This is a tough question, but to answer it we should consider the requirement that, for an M/M/c queue, each of the $c$ servers must be independent.
If we are modelling a coarse-grained service like a web server, then I think there’s enough interference between the threads to model each server process as an M/M/1 queue rather than as an M/M/c queue. Indeed, we might even go further and model each distinct machine as an M/M/1 queue, and only use an M/M/c queue to model multiple machines serving the same stream of customers.
If we were modelling a low-level component like a thread scheduler, then we would likely use an M/M/c queue, with $c$ equal to the number of CPUs, but at the coarse granularity of a web server, we can safely ignore the number of CPUs and threads and use an M/M/1 queue.
We’ll calculate the average latency of M/M/c queues from the steady-state probabilities. As I did in the previous entries, I’m not going to discuss the derivation of these probabilities (although I promise to do this in an upcoming post). Remember that the steady-state probabilities $p_n$ tell us the probability of there being $n$ customers in the system. We’ll start with $p_0$:
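$$p_0 = \left[\,\sum_{n=0}^{c-1} \frac{(\lambda/\mu)^n}{n!} + \frac{(\lambda/\mu)^c}{c!\,(1-\rho)}\right]^{-1}$$

where $\rho = \lambda/(c\mu)$ as above.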
For $p_n$ with $n > 0$, we must account for two scenarios: when the number of customers is less than the number of servers ($n < c$), and when the number of customers is greater than or equal to the number of servers ($n \ge c$):
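$$p_n = \begin{cases} \dfrac{(\lambda/\mu)^n}{n!}\,p_0 & 0 \le n < c \\[1.2em] \dfrac{(\lambda/\mu)^n}{c!\,c^{\,n-c}}\,p_0 & n \ge c \end{cases}$$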
Probability of Waiting
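An arriving customer has to wait whenever all $c$ servers are busy, so the probability of waiting is the sum of the steady-state probabilities for $n \ge c$, which simplifies to the well-known Erlang C formula:

$$P_{\text{wait}} = \sum_{n=c}^{\infty} p_n = \frac{(\lambda/\mu)^c}{c!\,(1-\rho)}\,p_0$$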
If we plot this function for different values of $c$, we can easily see how adding more servers to our system reduces the likelihood that a customer will have to wait:
By the time we have four servers, the chance of waiting is barely noticeable, even when $\lambda = \mu$.
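If you want to check the numbers yourself, here’s a minimal Python sketch (not the simulator mentioned later in the post, and the rates are made up for illustration) that evaluates the probability of waiting for a few values of $c$ with the arrival rate just below the single-server service rate:

```python
from math import factorial

def prob_wait(lam: float, mu: float, c: int) -> float:
    """Erlang C: probability that an arriving customer must wait in an M/M/c queue."""
    a = lam / mu   # offered load, lambda / mu
    rho = a / c    # utilisation, lambda / (c * mu)
    if rho >= 1:
        return 1.0  # unstable queue: in the limit, every arrival waits
    # p0 is the steady-state probability of an empty system
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    return a**c / (factorial(c) * (1 - rho)) * p0

# Arrival rate just below the single-server service rate (illustrative numbers).
lam, mu = 0.9, 1.0
for c in (1, 2, 4):
    print(f"c={c}: P(wait) = {prob_wait(lam, mu, c):.3f}")
# c=1: P(wait) = 0.900
# c=2: P(wait) = 0.279
# c=4: P(wait) = 0.014
```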
Multi-Server Wait Times
The average time spent waiting in the queue is:
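$$W_q = \frac{P_{\text{wait}}}{c\mu - \lambda}$$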
From this we get the average latency quite easily:
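$$W = W_q + \frac{1}{\mu}$$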
If we plot average latency for various values of $c$, we see how adding more servers is an effective way of reducing latency.
Take note of the log scale on the y-axis. At $\lambda = \mu$, the $c = 1$ queue is at 100% utilisation and latency is tending towards infinity. The extra capacity with larger values of $c$ is directly reflected in the significantly smaller latencies.
Faster Servers or More Servers?
When deploying an application, it’s interesting to consider whether a smaller number of faster servers is better than a larger number of slower servers. Ignoring any discussion of reliability, we can compare the latency of different M/M/c queues to help us pick a configuration.
The plot below compares two queue models: one with a smaller number of servers, each with a higher service rate, and the other with more servers, each with a lower service rate.
As you might expect, the queue with the lower service rate has a higher baseline latency. However, because there are more servers in that queue, the latency remains steady as $\lambda$ increases. Recall the stability condition $\lambda < c\mu$, and it should be apparent that more servers keep latency stable over a wider range of arrival rates.
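Here’s a small numerical sketch of that trade-off (the configurations and rates are made up for illustration, and aren’t necessarily the ones in the plot): a ‘fast’ configuration with two servers, and a ‘slow’ configuration with six servers at half the service rate:

```python
from math import factorial

def avg_latency(lam: float, mu: float, c: int) -> float:
    """Average time in system (waiting plus service) for an M/M/c queue."""
    a = lam / mu
    rho = a / c
    if rho >= 1:
        return float("inf")  # unstable: latency grows without bound
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    p_wait = a**c / (factorial(c) * (1 - rho)) * p0  # Erlang C
    return p_wait / (c * mu - lam) + 1.0 / mu        # Wq + mean service time

# Made-up configurations: a few fast servers vs. more, slower servers.
fast = dict(mu=2.0, c=2)  # total capacity c * mu = 4
slow = dict(mu=1.0, c=6)  # total capacity c * mu = 6
for lam in (1.0, 3.0, 3.8, 4.5):
    print(f"lambda={lam}: fast={avg_latency(lam, **fast):.2f}  "
          f"slow={avg_latency(lam, **slow):.2f}")
# lambda=1.0: fast=0.53  slow=1.00
# lambda=3.0: fast=1.14  slow=1.03
# lambda=3.8: fast=5.13  slow=1.11
# lambda=4.5: fast=inf  slow=1.28
```

The fast configuration wins at low load, but the slow configuration stays stable well past the point where the fast one saturates.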
To see more configurations in action, I’ve created a small simulator that you can use to compare two different queue models.
Limitations of the M/M/c model
The M/M/c model is a reasonable way to model systems with multiple servers, but it has some limitations. Since the service rate $\mu$ is a global parameter, it is not possible to model systems that have different service rates per server. In a cloud scenario you might have a set of core servers - all with the same service rate - running all the time. During periods of heavy load, you might scale up with some additional resources, but these may well have a different service rate, particularly if your base servers are especially beefy.
Another limitation of the M/M/c model is that it doesn’t account for the overhead of splitting incoming traffic between the servers. In a web environment, the individual servers receive their load from some load-balancing infrastructure. The load balancer will also have a service rate describing how fast it can do its work.
In my next post, I’ll discuss addressing these weaknesses using queue networks. As the name implies, queue networks describe how individual queues are composed into collaborating networks. A web application running on two servers is described as a queue network with three nodes: one for the load balancer, and one for each of the servers.