One can use "Round Robin", a feature that shuffles the resource records in a resource record set so that they occur in different orders, to achieve load balancing, fault tolerance, and traffic locality.
This is the Frequently Given Answer to such statements.
(You can read a different, and on one point flawed, approach to this same answer on the ISC's web page about load balancing and its own DNS server.)
No, one cannot.
Round-robin isn't a feature. It's nothing more than a bodge, and doesn't deserve to be promoted to the rank of feature. It attempts, without success, to produce the same sort of behaviour as would occur if the true features were properly implemented. However, it does not achieve any of the goals that one actually wants to achieve, because it is based upon premises that are actually false.
Round-robin has no real use. It's not even a poor man's load balancer.
The primary idea that underpins the notion of round-robin achieving anything is that the order of the resource records that content and proxy DNS servers use, when sending an A or an AAAA resource record set in their responses to DNS queries, will determine the order of the IP addresses that service clients attempt to connect to when using the service servers whose IP addresses have been looked up.
This tenet is false. The DNS protocol and the mechanics of query resolution simply do not work this way.
The DNS protocol imposes no ordering on the resource records within a resource record set. When handling a response datagram, a DNS client (including the DNS client that forms the back-end of a proxy DNS server) is under no obligation to preserve the ordering of the resource record set as it is received. (DNS clients may be configured to re-order resource record sets. The sortlist directive in BIND's DNS Client library, used by many applications, can be used to do this, for example. Indeed, in certain cases DNS clients are specifically obliged to re-order resource record sets. The DNS clients in SMTP Relay clients are obliged to re-order resource record sets, for example.) Moreover, a caching proxy DNS server is under no obligation to preserve the orderings of resource record sets held in its cache, or to preserve the orderings of resource record sets as received from content DNS servers (or upstream proxy DNS servers).
This completely defeats the idea that the sender of a DNS response datagram can specify a particular ordering of the resource records in a resource record set. Whatever ordering a content DNS server may specify in its responses can be overridden by a proxy DNS server (by its back-end re-ordering the resource record set as it receives the response, by its cache, and — most ironically — by whatever round-robin or other ordering "features" the proxy DNS server itself may have), and whatever ordering a proxy DNS server may specify in its responses can be overridden by the DNS client within the service client application itself (by its re-ordering the resource record set as it receives the response, and by whatever round-robin or other ordering "features" the DNS client library itself may have).
There is no ordering on the resource records within a resource record set, and thus specifying an order cannot be used to convey information from DNS servers to service client applications.
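To make this concrete, here is a sketch in Python (the function name and networks are invented for illustration) of the kind of re-ordering that a DNS client is perfectly free to perform, in the spirit of BIND's sortlist directive: whatever order the server sent, the client's own policy wins.

```python
import ipaddress

def sortlist_reorder(addresses, preferred_networks):
    """Re-order addresses so that those on a preferred network come
    first, in the order that the networks are listed -- regardless of
    the order in which the server happened to send them."""
    nets = [ipaddress.ip_network(n) for n in preferred_networks]

    def rank(addr):
        ip = ipaddress.ip_address(addr)
        for i, net in enumerate(nets):
            if ip in net:
                return i
        return len(nets)  # addresses on no preferred network go last

    return sorted(addresses, key=rank)

# Whatever shuffled order the content DNS server chose ...
as_received = ["203.0.113.7", "192.0.2.10", "198.51.100.5"]
# ... the client's own configured preference overrides it:
reordered = sortlist_reorder(as_received,
                             ["192.0.2.0/24", "198.51.100.0/24"])
assert reordered == ["192.0.2.10", "198.51.100.5", "203.0.113.7"]
```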
A secondary idea that underpins the notion of round-robin achieving anything is that (temporarily assuming the primary tenet to hold true for the sake of argument) the shuffling of resource records with every DNS response ensures that service clients will connect to different service servers across multiple transactions and that different service clients will see and use different orderings for service servers.
This tenet is false. It is defeated by the existence of caching.
Caching occurs in caching proxy DNS servers. In the situation where a content DNS server is employing round-robin shuffling of resource records, the caching proxy DNS server will only receive a response (with the resource records in a different order) whenever its back-end issues a fresh query. But this, in turn, will only occur whenever the previous resource record set has exceeded its TTL. Until that time, the cached resource record set, in whatever order it happens to be in, will continue to be used. Now, of course, there's no guarantee that the proxy DNS server will preserve the resource record set ordering that its back-end received from the content DNS server; but if it just so happens that it does, all of the DNS clients querying that proxy DNS server will only see the ordering of the resource record set change whenever the cached copy expires and a fresh copy is fetched by the proxy DNS server's back-end from the content DNS server. Moreover, all of the DNS clients querying that proxy DNS server will see the same ordering. Again, of course, there's no guarantee that the DNS clients within the service client applications will preserve the resource record set ordering that they receive from proxy DNS servers; but if it just so happens that they do, then all of the service clients whose DNS clients share that same proxy DNS server will see the same ordering.
Now consider the case of service client applications run by a large number of customers of a single large ISP, where the DNS clients on the customers' machines are configured to use that ISP's proxy DNS service (and to do no re-ordering or caching of their own — of which more later). If the ISP has a million customers actively and repeatedly using the service, then a million service clients will all be connecting to the service server IP addresses in the exact same order. And every TTL seconds, all of those million service clients will switch to a new order, and all hit on a different service server IP address first. Caching by proxy DNS servers (that preserve the received ordering) causes a disproportionate weight to be given to the resource record set orderings that happen to be seen by proxy DNS servers that serve a lot of clients.
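The effect is easy to simulate. The sketch below is a toy, hypothetical cache (the class name is invented, and no real proxy DNS server works exactly like this): it caches the resource record set, ordering included, until the TTL expires, so every lookup within that window yields the identical ordering, however diligently the content DNS server shuffles.

```python
import random

class ToyProxyCache:
    """A toy caching proxy DNS server, for illustration only."""
    def __init__(self, fetch, ttl):
        self.fetch = fetch        # queries the content DNS server
        self.ttl = ttl
        self.cached = None
        self.expires = -1.0

    def lookup(self, now):
        if self.cached is None or now >= self.expires:
            self.cached = self.fetch()       # fresh, newly shuffled set
            self.expires = now + self.ttl
        return self.cached                   # same ordering until expiry

records = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
def shuffled_fetch():
    # the content DNS server dutifully round-robins its records
    return random.sample(records, k=len(records))

proxy = ToyProxyCache(shuffled_fetch, ttl=300)
first = proxy.lookup(now=0)
# Every client querying within the TTL window sees the same ordering,
# no matter how many there are:
assert all(proxy.lookup(now=t) == first for t in range(1, 300))
```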
To avoid this problem, people employing round-robin resource record set shuffling will set the TTL of the resource record set very low, to attempt to ensure that every connection to the service server is preceded by a fresh fetch of the newly shuffled resource record set from the content DNS server. However, this creates further problems of its own, and doesn't work anyway.
In theory, a TTL of zero prevents proxy DNS servers from caching a resource record set, and thus every use of a service server would be preceded by the DNS lookup for that server's IP address(es), passing all of the way through from the DNS client to the actual content DNS servers, since nothing is cached. Ironically, this often doesn't actually work, and causes problems. TTLs of zero can cause DNS lookups to fail entirely. Hence it is common for a TTL no lower than 60 seconds to be used. However, low TTLs cause extra DNS lookup traffic, of course, using up more network bandwidth and putting a greater load on content and proxy DNS servers. Eliminating this overhead is why resource record sets are cached in the first place, after all.
Furthermore, TTLs are not always respected. This is because DNS clients can also perform caching. Even the applications themselves often perform caching of the data that they receive from the DNS clients. Such caching often does not operate at the level of DNS lookups, and does not retain the DNS TTL information. For example:
nscd on Unix and Linux caches the results of domain name lookups. But it doesn't respect the TTL information in the original DNS responses. Instead, TTLs are explicitly forced to specific values by the positive-time-to-live and negative-time-to-live directives in the nscd.conf configuration file.
Ironically, Microsoft's DNS Client service on Windows NT 4/2000/XP does respect the TTL information in DNS responses. The difference is that Microsoft's DNS Client service operates at the level of the actual DNS transactions with the proxy DNS server, whereas nscd operates at the level of gethostbyname() et al., which can involve other sources of information apart from the DNS, where the concept of a TTL is often meaningless.
Microsoft's Internet Explorer caches the results of domain name lookups internally, retaining the results of previous lookups for 30 minutes by default (which can be modified by adjusting the DnsCacheTimeout value of the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings key in the Windows registry). This caching occurs separately within each Internet Explorer process, but an individual Internet Explorer process may be responsible for several web browser windows.
Netscape Navigator/Mozilla caches the results of domain name lookups internally, retaining the results of previous lookups for 15 minutes. This caching occurs separately within each Navigator/Mozilla process, but (alas!) Navigator and Mozilla go to great lengths to ensure that a single process is responsible for all web browser windows.
The squid proxy HTTP server caches the results of domain name lookups internally without respecting the actual DNS TTL information. Older versions can be modified to respect the DNS TTL information with a patch. Newer versions overwrite a zero TTL with whatever value is set for the positive_dns_ttl configuration variable (defaulting to 6 hours), and a TTL with a value of less than 1 minute with the value in negative_dns_ttl (defaulting to 1 minute).
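The clamping behaviour just described can be sketched as follows (the constants are the defaults quoted above; the function name is invented for illustration):

```python
# Defaults as quoted above for newer versions of squid (assumption:
# configuration left at its defaults).
POSITIVE_DNS_TTL = 6 * 60 * 60   # positive_dns_ttl: 6 hours
NEGATIVE_DNS_TTL = 60            # negative_dns_ttl: 1 minute

def effective_ttl(dns_ttl):
    """Return the TTL that such a cache would actually honour."""
    if dns_ttl == 0:
        return POSITIVE_DNS_TTL      # a zero TTL is overwritten wholesale
    if dns_ttl < 60:
        return NEGATIVE_DNS_TTL      # sub-minute TTLs are rounded up
    return dns_ttl

# A content DNS server that publishes a TTL of zero in order to defeat
# caching gets six hours of caching instead:
assert effective_ttl(0) == 21600
assert effective_ttl(30) == 60
assert effective_ttl(3600) == 3600
```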
There are three goals that people generally have in mind when they consider round-robin. None of the three goals is achieved by making content or proxy DNS servers rotate or randomise resource record sets.
The first goal, load balancing, is achieved in one of two ways:
SRV resource records.
The point of re-ordering resource record sets is to attempt to use the order as a hidden channel for carrying preference information for A or AAAA resource record sets to service client applications. Of course, as described earlier, there simply is no such hidden channel, since order preservation is not guaranteed. In contrast, however, SRV resource records have explicit support for carrying preference information.
Whether this is an effective mechanism depends upon the application service being provided, because in turn it depends upon whether the application client softwares use SRV resource records at all. Many application client softwares do, but some (to their embarrassment) do not.
There is one minor problem with SRV resource records: They use the same two-stage mapping scheme that is used in the DNS for delegation information mappings and SMTP Relay server mappings, and thus require that each service server have its own intermediate domain name.
Use a real load balancer. A real load balancer will distribute service requests at a single IP address to one or more back-end service servers, according to load.
The ISC's web page about load balancing and its own DNS server recommends an additional way of achieving this goal, namely of having a content DNS server that is aware of the load on each of the service server machines, by some private channel, and that orders the resource record sets that it publishes in ascending order of load. However, this is one point on which the ISC's web page is flawed. As described earlier, there is simply no guarantee that the ordering used by the content DNS server when it publishes a resource record set will even reach the service client applications, let alone affect the order in which they attempt to connect to the service server IP addresses.
One way that people attempt to work around this difficulty is to have the specialized content DNS server publish a resource record set comprising just the first address, selected, according to load, from the list of service server IP addresses. This variation on the scheme is often called "Global Server Load Balancing", and one can buy expensive systems to implement it. However, as Pete Tenereillo points out, it too causes problems, since by eliminating in this way the possibility of proxy DNS servers and DNS clients re-ordering the resource record set, one creates single-point failure modes for one's service and loses any hope of fault tolerance.
The second goal, fault tolerance, is reached by having service clients that attempt to connect to each address in turn. Such clients are not hard to write: it's not hard to iterate over all of the addresses returned from a DNS lookup, trying each in turn until a connection succeeds.
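Such a client can be sketched in a few lines of Python (the function name is invented; hostname and port are placeholders):

```python
import socket

def connect_to_service(hostname, port):
    """Try every address for the service in turn, returning a connected
    socket for the first address that answers."""
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            hostname, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock                # first server that answers wins
        except OSError as err:
            last_error = err           # dead server: fall through to the next
    raise last_error or OSError("no addresses returned")
```

A client built this way gets fault tolerance whatever order the addresses arrive in, which is precisely why round-robin shuffling contributes nothing to it.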
The third goal, traffic locality, is reached …
… by having service clients that sort service server addresses into proximity order. The sortlist directive in BIND's DNS Client library, used by many applications, can be used to do this (to an extent), for example.
(Note that service clients themselves are the only entities capable of doing this, because only they know the IP address from which they will be attempting to connect to the service server. Unfortunately, some proxy DNS server softwares and content DNS server softwares have proximity sorting features. These features are flawed, not only because they rely on the same erroneous notion that underpins round-robin (that the ordering used by the content DNS server when it publishes a resource record set will even reach the service client applications, let alone affect the order in which they attempt to connect to the service server IP addresses), but also because they operate upon the wrong IP address anyway. The IP address that a content DNS server sees is the IP address used by the back-end of a resolving proxy DNS server. The IP address that a resolving proxy DNS server sees is either the IP address of the back-end of a forwarding proxy DNS server or the IP address used by the DNS client library to communicate with it. Neither of those two addresses is, especially on machines with multiple NICs, necessarily the same as the IP address that the service client will use to connect to the service server. Proximity sorting based upon the former will not necessarily produce an optimal ordering for use with the latter.)
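A crude sketch of such client-side proximity sorting follows (IPv4 only; the function names are invented, and "proximity" is here approximated, very roughly, by the number of leading address bits shared with the client's own address):

```python
import ipaddress

def common_prefix_len(a, b):
    """Number of leading bits two IPv4 addresses have in common."""
    x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
    return 32 - x.bit_length()

def proximity_sort(candidates, local_address):
    """Sort candidate server addresses so that those 'nearest' the
    address the client will connect from come first."""
    return sorted(candidates,
                  key=lambda c: common_prefix_len(c, local_address),
                  reverse=True)

# Seen from 192.0.2.55, the on-network server sorts first:
servers = ["198.51.100.9", "192.0.2.200", "203.0.113.77"]
ordered = proximity_sort(servers, "192.0.2.55")
assert ordered[0] == "192.0.2.200"
```

Only the client can run this, because only the client knows which local address it will actually connect from; a DNS server sorting on some intermediary's address is sorting on the wrong key.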
… by having the service servers listen on anycast IP addresses, so that IP traffic is automatically routed to the nearest service server in the set of servers distributed around the Internet.