I've set up a content DNS server listening on a publicly accessible IP address, to publish my DNS data, and have populated its database with the data that I actually want published. But the registry/registrar is complaining about my content DNS server, showing me the output of some testing tools that it has run. What's all this about?
This is the Frequently Given Answer to that question. You can find an alternative hall of shame distributed across all of the "how to receive a delegation from …" pages on Dan Bernstein's djbdns site.
Foolish registries and (more usually) registrars perform all sorts of daft, pointless, and downright wrong tests on prospective content DNS servers before they will make delegations. Many of these tests are performed by rote ("That's just what our testing tool was written to do."), based upon handed-down notions of best practice ("We were told, simply, that servers have to do this."), and with little or no actual understanding either of bailiwick or of what content DNS service actually is and how it differs from proxy DNS service.
Here is the hall of shame listing all of the well-known unnecessary and often incorrect hoops that foolish registries and registrars make subdomain owners jump through.
Foolish registries/registrars send NS queries for . (the DNS namespace root) to subdomain content DNS servers, and whinge if they don't receive some root delegation information back. (Sometimes they even whinge if the root delegation isn't specifically that of ICANN's . content DNS servers.) They also send queries for localhost. and 1.0.0.127.in-addr.arpa. to subdomain content DNS servers, and whinge if they don't receive particular information back.
This bespeaks a fundamental lack of understanding of the DNS on their part, with respect to the proper separation of services, bailiwick, and the nature of delegation.
The subdomain's content DNS server doesn't need to know anything about the root or about localhost. and 127.in-addr.arpa. in order to operate. Foolish registries/registrars assume that everyone is running one of the old-style DNS server softwares that vainly attempt to wear all of the hats at once.
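What such a probe looks like on the wire is unremarkable: it is an ordinary DNS question, here an NS query for the root. A minimal sketch of one, using only the Python standard library (the function name is ours, not from any tool; the format is RFC 1035 § 4):

```python
import struct

def make_query(qname: str, qtype: int, qid: int = 0x1234) -> bytes:
    """Encode a minimal non-recursive DNS query (RFC 1035 section 4)."""
    # Header: ID, flags (all zero: a plain non-recursive query), QDCOUNT=1.
    header = struct.pack(">HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    # QNAME is a sequence of length-prefixed labels; the root name is
    # just the single terminating zero octet.
    labels = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
        if label
    )
    question = labels + b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

# The probe in question: an NS query (QTYPE 2) for "." itself.
packet = make_query(".", 2)
```

Whatever a content DNS server chooses to answer to such a probe, nothing obliges it to carry root delegation data at all, which is the point of this section.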
RFC 1912 § 4.1 recommends that all proxy DNS servers perform mini-"split horizon" DNS service for localhost. and 127.in-addr.arpa. (Actually it recommends 0.0.127.in-addr.arpa., but that is wrong.), effectively rendering those portions of the DNS namespace tree "host local" with all hosts having their own private data for them (albeit that for many hosts those data will be much the same). Employing "split horizon" DNS service on every host for these "host local" data is intended to prevent queries for such domain names from leaking out of a host, and impacting public content DNS servers.
For resolving proxy DNS service, a server needs to be told where the root servers are. All-of-the-hats-at-once DNS server softwares still need such a list, even if they have been configured (in accordance with best practice) to provide just content DNS service; and it just so happens that where an all-of-the-hats-at-once DNS server software is providing content DNS service it will publish its list of root servers, needed for the proxy DNS service that it isn't actually providing, to the rest of Internet.
But these are not true of content DNS service.
For performing content DNS service, a server need not know anything about the root. Content DNS servers don't talk to other DNS servers. They have no reason to know anything about other DNS servers (other than their database replication peers). New-style DNS server softwares, that implement content DNS service entirely separately from proxy DNS service, do not need to know anything about the root, let alone publish that information.
For performing content DNS service, a server need not know anything about localhost. and 127.in-addr.arpa.. The "split horizon" DNS service for the "host local" portions of the DNS namespace is handled by proxy DNS servers (and sometimes by DNS Clients directly), not by content DNS servers. New-style DNS server softwares, that implement content DNS service entirely separately from proxy DNS service, do not need to know anything about localhost. and 127.in-addr.arpa., let alone publish that information.
It is simply not necessary for the subdomain's content DNS server to publish anything at all, let alone anything meaningful, about the root or about localhost. and 127.in-addr.arpa.. No-one is going to ask it about them, it being, in the aspect that the registry/registrar is concerned with, a content DNS server for the subdomain being delegated. It's only going to be asked, by the rest of Internet, about the subdomain that the registry/registrar is arranging to delegate to it. That is, after all, how delegation in the DNS works. Content DNS servers can publish information about whatever domain names they like. But they'll only be asked about the domain names that are actually delegated to them.
It would only ever be asked about the root if someone else came along and decided to configure their resolving proxy DNS server to use it as a root server. However, in that case, whether it published information about the root and what information it published would be a matter for agreement between it and that other person, and not a matter for the foolish registry/registrar.
Moreover, even that much doesn't apply to localhost. and 127.in-addr.arpa.. There is no reason for anyone to decide to configure their resolving proxy DNS server to ask it about those domain names. Indeed, several proxy DNS server softwares simply cannot actually be configured to do that, since their treatment of those domain names is hardwired.
Requiring that particular root delegation information and particular information about localhost. and 127.in-addr.arpa. be published is tantamount to compulsory cache poisoning. A lot is asserted by a few people about how content DNS servers must not publish cache poison. Most of that is false, and based upon a lack of understanding of bailiwick. But, ironically, it is usually those same people who then proceed to also assert that content DNS servers must publish ICANN's root delegation information or must have particular mappings between the IP address 127.0.0.1 and localhost.. That is, ironically, requiring the very publication of poisonous data that they decry in other contexts. There's no way for a content DNS server to know what root information any given resolving proxy DNS server, querying it, may be using. Any root delegation information that it might publish is going to be cache poison as far as someone is concerned. And the "host local" portions of the DNS namespace are entirely private to a host. There's a lot of room in 127.0.0.0/8, and requiring particular database content is dictating things that no-one outside of a host has the right to dictate.
Fortunately, whatever the subdomain's content DNS server publishes about the root, about localhost., and about 127.in-addr.arpa., it won't be believed. That is, after all, how bailiwick in the DNS works. Content DNS servers can publish information about whatever domain names they like. But they'll only be believed about the domain names that are actually delegated to them.
The stricture against the publication of cache poison is wrong, but so too is the stricture about what root, localhost., and 127.in-addr.arpa. information may be published.
The basic mistake that the foolish registries/registrars are making is that of thinking that what a content DNS server publishes about domain names that are outside of its bailiwick actually matters. It does not. The contents of the global distributed DNS database are formed by combining only the in-bailiwick data published by all of the content DNS servers around the world. All other, out-of-bailiwick, data are simply irrelevant. Tests on what a content DNS server may say about out-of-bailiwick domain names are pointless.
Foolish registries/registrars send SOA queries for the delegation point to subdomain content DNS servers, study the fields of the resulting SOA resource records relating to DNS database replication, and whinge if they don't see values that they like.
Such foolish registries/registrars don't understand what the semantics of those fields actually are. They make wholly unjustified assumptions about the semantics of those fields (based upon wholly unjustified assumptions about what DNS database replication mechanism is in use), from which they derive wholly unjustified limitations on what those fields may contain.
Registries/registrars have no way to know what database replication mechanism is in use amongst the content DNS servers for the subdomain. That is an entirely private matter amongst the (administrators of the) coöperating content DNS servers themselves. They thus have no way to know what values are reasonable in the fields, relating to DNS database replication, of subdomain SOA resource records.
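For reference, the replication-related fields live in the tail of the SOA record's RDATA: serial, refresh, retry, expire, and minimum, each an unsigned 32-bit quantity (RFC 1035 § 3.3.13). A minimal sketch of decoding them, with invented sample values:

```python
import struct

def soa_timers(rdata_tail: bytes) -> dict:
    """Decode the five 32-bit fields that end an SOA record's RDATA
    (RFC 1035 section 3.3.13).  Their meanings are only defined relative
    to the database replication mechanism actually in use."""
    serial, refresh, retry, expire, minimum = struct.unpack(">IIIII", rdata_tail)
    return {
        "serial": serial,    # a version stamp; nothing requires it to look like a date
        "refresh": refresh,  # meaningful only to "zone transfer" replication peers
        "retry": retry,      # ditto
        "expire": expire,    # ditto
        "minimum": minimum,  # negative-answer TTL (RFC 2308), not a replication field
    }

# Invented sample values, packed as they would appear on the wire.
sample = struct.pack(">IIIII", 2024010101, 3600, 900, 604800, 300)
timers = soa_timers(sample)
```

A tester that, say, rejects a "refresh" value it considers too large is assuming "zone transfer" replication, which it has no grounds to assume.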
Foolish registries/registrars whinge if the IP addresses of multiple content DNS servers within a single set have the first three octets in common.
This thinking is a hangover from the days when IP addresses had classes, from which one could easily deduce whether two IP addresses were on the same network. If two IP addresses differed only in their final octet, the deduction would be that they were on the same network, behind a common connection to the rest of Internet, and thus that the good practice, of not having a single point of failure (in this case, the shared connection to the rest of Internet) for multiple content DNS servers in a set, had not been followed.
Internet addressing has not worked this way for quite some time. The foolish registries/registrars need to drag themselves kicking and screaming into the 1990s (sic!). It's no longer possible to unequivocally state that two IP addresses that differ only in their final octets are on the same network.
The mistake that the foolish registries/registrars are making is in thinking that a simple comparison of octets in two IP addresses will tell them what is in fact only discoverable by checking an actual border gateway routing table.
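The point can be made concrete with the standard library's ipaddress module. The prefixes below are invented, using RFC 5737 documentation addresses; under CIDR, two addresses sharing their first three octets can sit in disjoint prefixes routed quite separately:

```python
import ipaddress

def same_classful_c_network(a: str, b: str) -> bool:
    """The obsolete deduction: class C addresses sharing their first
    three octets were assumed to share one network."""
    return a.split(".")[:3] == b.split(".")[:3]

a, b = "192.0.2.10", "192.0.2.200"               # RFC 5737 documentation addresses
classful_verdict = same_classful_c_network(a, b)  # True: the old test flags them

# Under CIDR the same pair can belong to two disjoint /28s, quite
# possibly announced through entirely separate border gateways.
net1 = ipaddress.ip_network("192.0.2.0/28")
net2 = ipaddress.ip_network("192.0.2.192/28")
disjoint = not net1.overlaps(net2)
```

The octet comparison says "same network"; the routing reality can say otherwise, and only the routing tables know.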
Foolish registries/registrars whinge if content DNS/UDP service is provided but content DNS/TCP service is not.
The provision of DNS/TCP service is not actually a requirement, even though a few very vocal people claim that it is, or should be. RFC 1123 § 6.1.3.2 merely says that content DNS servers "should be able to service TCP queries". This is a recommendation, which there may be "valid reason in particular circumstances to ignore". And indeed there are. If the circumstances are that there is no reason for providing DNS/TCP service, then the fact that TCP services (in general) are open to denial-of-service attacks, in ways that UDP services are not, is a "valid reason in particular circumstances to ignore" that recommendation. Given that RFC 1123 § 6.1.3.2 also says that the back ends of resolving proxy DNS servers "must support UDP" and "must send a UDP query first", the circumstances in which there is no reason for providing DNS/TCP, and thus where the general weaknesses of TCP service become reasons to ignore the recommendation to provide it, are when no response from the subdomain's DNS database will ever incur truncation at the 512-octet DNS/UDP limit, and when "zone transfer" is not the database replication mechanism in use.
RFC 1123 § 6.1.3.2 also says that "truncation cannot be predicted, since it is data-dependent". The foolish registries/registrars, of course, have no access to the DNS databases of the content DNS servers. Thus what RFC 1123 § 6.1.3.2 says is true as far as the foolish registries/registrars are concerned. Moreover, the database replication mechanism that is in use is an entirely private matter amongst the coöperating content DNS servers themselves. The foolish registries/registrars have no way of knowing whether both of the aforementioned conditions (for ignoring the recommendation to provide DNS/TCP service) actually hold.
But the administrator of the content DNS servers themselves does, and what RFC 1123 § 6.1.3.2 says does not hold for the administrator. The administrator can look at the contents of the DNS database and determine from them whether any response will incur truncation. (Put another way: The administrator can ensure that no data are added to the database in the first place that would cause truncation to occur.) The administrator is also privy to the DNS database replication arrangements, and knows whether the "zone transfer" database replication is in use or not. The administrator can thus (albeit not without some work in some circumstances) determine whether the conditions hold and thus whether there are reasons to ignore the recommendation to provide DNS/TCP service.
On the gripping hand, this still doesn't justify the foolish registrars/registries checking for the existence of content DNS/TCP service and whingeing if it is not provided. They are simply not in a position to determine whether that check is a valid one to be making.
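The administrator's side of that determination can be sketched, under the simplifying assumption that the answers in question are A record sets with compressed owner names (the function and its bound are ours, not from any RFC):

```python
def a_response_wire_size(qname: str, address_count: int) -> int:
    """Upper bound on the wire size of a DNS/UDP response to an A query:
    12-octet header, the question, and one answer per address, where each
    answer is a 2-octet compression pointer for the owner name plus
    type/class/TTL/RDLENGTH (10 octets) and 4 octets of address."""
    qname_octets = sum(len(l) + 1 for l in qname.rstrip(".").split(".") if l) + 1
    question = qname_octets + 4                 # QTYPE + QCLASS
    answers = address_count * (2 + 10 + 4)      # pointer + fixed fields + rdata
    return 12 + question + answers

# Even a generous eight-address record set fits DNS/UDP with room to
# spare, so no query for it will ever elicit a truncated response.
size = a_response_wire_size("www.example.it.", 8)
```

An administrator who keeps every record set under such a bound knows, in a way no outside tester can, that the truncation condition holds.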
Foolish registries/registrars whinge if "zone transfer" database replication is not available to certain IP addresses that they designate.
This check is wrong for several reasons:
For better or for worse, DNS administrators are leery of providing "zone transfer" service to others. They prevent "zone transfer" access in the name of "security". "If people cannot obtain a complete copy of my DNS database, then they cannot enumerate my publicly reachable computers.", goes the thinking. There is an argument that this is simply the old "security through obscurity" fallacy, and to a huge extent it is, especially given that "NXT walking" provides the very same information, that "zone transfer" would provide, also in the name of "security". Nonetheless, DNS administrators do not want to provide "zone transfer" access to others.
(For some DNS server softwares, a better "security" argument against the provision of "zone transfer" service to others might be that "zone transfer" write-locks the server's DNS database for the duration of the transaction, preventing any updates to it, from Dynamic DNS updates or from ordinary maintenance by an administrator. This hands a denial-of-update weapon to anyone with access to "zone transfer" service, since they can simply perform very slow "zone transfers" repeatedly.)
"Zone transfer" database replication may not in fact be the replication mechanism that is actually being employed. The database replication mechanism that is in use amongst a set of coöperating content DNS servers is an entirely private matter for (the administrators of) those servers. It doesn't have to actually be "zone transfer" at all. That's not the only DNS database replication mechanism by a long chalk. There is a wide range of database replication mechanisms that could be in use, from Active Directory integration to rsync. Requiring that the "zone transfer" mechanism be set up solely for their benefit is both arrogant and presumptuous on the parts of the foolish registrars/registries.
Providing "zone transfer" service might necessitate the provision of DNS/TCP service where otherwise it would be unnecessary. The provision of DNS/TCP service is not actually a requirement. Foolish registrars/registries requiring that the "zone transfer" mechanism be set up solely for their benefit are imposing an undue requirement upon DNS administrators, who would otherwise not need to provide DNS/TCP service, and who would otherwise be able to avoid the extra work and extra risk inherent in the provision of a TCP service.
The "need" for "zone transfer" is sometimes an entirely bogus one caused by the use of broken tools. One "checking" tool that foolish registrars/registries employed needed "zone transfer" access simply because it was badly designed. Rather than sending ordinary queries, in the normal manner, to the content DNS servers being checked, it would use "zone transfer" to obtain a copy of all of the DNS data, which it would then check locally. Fortunately, the tool was rewritten to not be so sloppy. Unfortunately, outdated and downright wrong checking tools continue to circulate amongst those who don't know any better (as the continued existence and use of dnstracer proves), including foolish registries/registrars.
The "need" for "zone transfer" is sometimes an entirely bogus one caused by the false belief that the subdomain owner wants the registry/registrar to provide DNS hosting services in addition to delegation. Some foolish registries/registrars simply cannot comprehend that subdomain owners might not want them to provide DNS hosting service in addition to delegation. They set up their servers to perform "zone transfer" database replication of the subdomain's DNS data because, to them, it is "obvious" that subdomain owners want their servers to have "slave" copies of the "zones".
This is, of course, not true. Subdomain owners do not necessarily want that. And BCP 0010 § 2.5 is a good indication that superdomain owners that are serious about what they are doing don't want that, either.
The "zone transfer" may not even work. The schema used by "zone transfer" is pretty minimal, and does not match any of the database schemata used by any of the mainstream DNS server softwares. It is highly likely that "zone transfer" will lose information (such as the time-to-die values included in the database schema of Dan Bernstein's djbdns, or the $GENERATE information included in the database schema of ISC's BIND, or the update timestamp and ACL information included in the schema of one of the databases of Microsoft's DNS server). It may even fail entirely. (Some "zone transfer" clients are, alas!, incapable of dealing with all types of resource record sets.)
The basic mistake that the foolish registries/registrars are making is in thinking that "zone transfer" service is an integral part of content DNS service and that they have some sort of right to privileged access to the subdomain's DNS database. It isn't, and they do not. "Zone transfer" service is an extension to the core DNS protocol; and is only a database replication mechanism (not even a very good one, at that), not anything to do with content DNS service proper. Database replication is an entirely private matter for (the administrators of) a set of coöperating content DNS servers, and not an automatic right to outsiders. And "zone transfer" database replication is just one of many database replication mechanisms, which is falling in popularity with the advent of modern DNS servers with more flexible databases.
Foolish registries/registrars whinge if, when asked to delegate a subdomain, no mail service for the <Postmaster@…> mailbox at that subdomain is provided.
This is just blatantly silly. Not all domain names are there for the purposes of electronic mail. Indeed, many aren't. (Just ask the owners of altavista.com., a well known "no electronic mail here" domain, for example.) Requiring that a subdomain have mail service is just daft.
It's also counterproductive. It requires that the administrators of "no electronic mail" domains set up dummy SMTP Relay servers solely for the benefit of appeasing the foolish registry/registrar, and prevents them from adopting best practice. Best practice, of course, is that if the intent is to provide no electronic mail service to Internet, one should not have a (potentially exploitable) SMTP Relay server at all.
Foolish registries/registrars perform a very bizarre check, and of course whinge if it fails. They look at the intermediate domain names that form the halfway point in the delegation information for the domain that is to be delegated, strip off the first label, ask the content DNS servers being delegated to for the NS resource record sets for the resulting names, and then check that those resource record sets comprise the same intermediate domain names that they started with.
One can almost see the logic in this, as long as one adopts the following erroneous premise:
For every subdomain example.it., the intermediate domain names for the content DNS servers will be of the form ns1.example.it., ns2.example.it., and so forth.
Given that premise, what happens when the check is performed is that the foolish registry/registrar looks at the proposed delegation information for example.it., sees the intermediate domain names ns1.example.it. and ns2.example.it., strips off their first label to yield example.it., looks up the NS resource record set for example.it., and finds ns1.example.it. and ns2.example.it. again.
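Mechanically, the check's first step amounts to nothing more than discarding one label (the example names below echo the ones used above):

```python
def bizarre_check_target(intermediate_name: str) -> str:
    """The registry's heuristic: strip the first label from an
    intermediate domain name, then query the remainder for NS records,
    expecting the answer to list the original intermediate names."""
    return intermediate_name.split(".", 1)[1]

# The one layout for which the premise holds:
target = bizarre_check_target("ns1.example.it.")

# A delegation using an unrelated hosting company's server instead:
other = bizarre_check_target("ns.first-hosting-company.it.")
```

For the second name the heuristic sends the registry off to query first-hosting-company.it., whose NS records have no reason whatever to mention the subdomain's delegation.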
But, of course, that premise is erroneous.
It's erroneous for people that follow good practice for making delegations.
Good practice for making delegations is for the intermediate domain names being used to be subdomains of the enclosing superdomain. For example: When delegating example.it., the intermediate domain names for the content DNS servers would be subdomains of it..
But that means that the registrars'/registries' premise is not necessarily true when good practice is followed. For example: The intermediate domain names in the delegation information for example.it. could be ns.first-hosting-company.it. and ns.second-hosting-company.it.. When the foolish registries/registrars perform their bizarre check, they strip the first label off, look up the NS resource record sets for first-hosting-company.it. and second-hosting-company.it., and find that (because the hosting companies are unrelated to each other) in neither case do they get back the same intermediate domain names that they originally started with.
It's even erroneous for people that follow best practice for making delegations.
Best practice for making delegations is for the intermediate domain names being used to be subdomains of the subdomain itself. For example: When delegating example.it., the intermediate domain names for the content DNS servers would be subdomains of example.it. itself. (The ns1.example.it. and ns2.example.it. pattern thus follows best practice.)
However, best practice does not require that the intermediate domain names of the content DNS servers be immediate subdomains of the subdomain itself; and there are advantages to be had, in terms of reducing namespace collisions and maximising the potential for compression of DNS datagrams, from putting all of the intermediate domain names underneath a single subdomain of the subdomain. For example: When delegating example.it., the intermediate domain names for the content DNS servers would be subdomains of ns.example.it.. (A commonly used pattern is a.ns.example.it., b.ns.example.it., and so forth. This keeps all of the intermediate domain names in their own special area, and, since the domain names differ from one another in only the first, single-letter, label, maximises the compression potential of DNS datagrams.)
That means that the registrars'/registries' premise is not necessarily true even when best practice is followed. When the foolish registries/registrars perform their bizarre check, they strip the first label off, look up the NS resource record set for ns.example.it., find it to be empty (since that is not a delegation point in the DNS namespace tree, merely an ordinary subdomain), and thus do not get back the original intermediate domain names that they started with.
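The compression gain from the a.ns/b.ns pattern is easy to estimate with a toy encoder. This sketch (the function is ours) treats any previously emitted suffix as pointer-compressible, which is what RFC 1035 § 4.1.4 allows within a single message:

```python
def compressed_wire_size(names: list) -> int:
    """Total octets needed to encode a list of domain names in one DNS
    message, replacing the longest already-emitted suffix of each name
    with a 2-octet compression pointer (RFC 1035 section 4.1.4)."""
    seen = set()
    total = 0
    for name in names:
        labels = name.rstrip(".").split(".")
        for i in range(len(labels)):
            suffix = tuple(labels[i:])
            if suffix in seen:
                total += 2                  # pointer to the earlier suffix
                break
            seen.add(suffix)
            total += len(labels[i]) + 1     # length octet + label
        else:
            total += 1                      # terminating root label
    return total

single_area = compressed_wire_size(
    ["a.ns.example.it", "b.ns.example.it", "c.ns.example.it", "d.ns.example.it"])
flat = compressed_wire_size(
    ["ns1.example.it", "ns2.example.it", "ns3.example.it", "ns4.example.it"])
```

After the first name is emitted in full, every further single-letter name in the shared ns.example.it. area costs only four octets (one label plus a pointer), which is where the saving over the ns1/ns2 pattern comes from.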
It's also erroneous for people that follow bad practice for making delegations.
If the intermediate domain names in the delegation information for example.it. were ns.example.com. and ns.example.net., the registries'/registrars' bizarre check would fail yet again. (This is mentioned merely for the sake of completeness.)
There is simply no justification at all for this bizarre check. It is based upon a very narrowminded notion, that is simply false in practice, of how delegations are constructed.
Here is a hall of shame listing some of the hoops that registries and registrars do not make subdomain owners jump through.
Registries/registrars don't check to see whether content DNS servers respond to EDNS0 queries. But doing so would vastly improve the lot of resolving proxy DNS servers that are capable of using EDNS0.
Currently, most DNS lookups involve public content DNS servers that don't actually support EDNS0. Such servers will fail in one of a (quite large) number of ways when sent EDNS0 queries. (These can be somewhat bizarre. One DNS server software sends back "bad format" responses with empty "question" sections, for example.) Where a resolving proxy DNS server is configured to use EDNS0, most of its back-end queries involve two DNS/UDP transactions, the first attempting to use EDNS0, either receiving an error response or timing out after receiving no response, and the second not. In contrast, when configured to not use EDNS0, there is just one DNS/UDP transaction in those cases. The use of EDNS0 yields a gain from the loss of the DNS/TCP setup/teardown overhead in the subset of cases where TCP fallback would otherwise be used. But this is vastly diminished by the loss incurred by the concomitant increase in DNS/UDP traffic, and the extra timeouts talking to the non-responsive-to-EDNS0 servers, for pretty much all lookups across the board.
The irony of this is that if content DNS servers implemented EDNS0 (even if only to the extents of recognising and parsing EDNS0 queries properly, and advertising 512 octet DNS/UDP datagram sizes), the current situation would be much improved.
Checking that content DNS servers respond non-erroneously to EDNS0 queries (not that they support extensions, such as large UDP datagram sizes, but merely that they do not yield error responses, or simply fail to respond at all, when sent EDNS0 queries) is, ironically, one of the generally beneficial checks that registries/registrars don't make.
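Recognising an EDNS0 query means parsing the OPT pseudo-RR in its additional section, and answering one means emitting the same structure back (RFC 2671). A minimal sketch of that structure, advertising the conservative 512-octet size mentioned above:

```python
import struct

def opt_pseudo_rr(udp_payload_size: int = 512) -> bytes:
    """Wire form of an EDNS0 OPT pseudo-RR (RFC 2671): root owner name,
    TYPE=41, the advertised UDP payload size carried in the CLASS field,
    a zeroed extended-RCODE/flags field where the TTL would be, and an
    empty RDATA."""
    return b"\x00" + struct.pack(">HHIH", 41, udp_payload_size, 0, 0)

# The eleven octets a content DNS server would append to a response in
# order to acknowledge EDNS0 without promising any larger datagrams.
opt = opt_pseudo_rr(512)
```

Emitting just these eleven octets, instead of an error or silence, is all it would take to spare resolving proxy DNS servers their wasted first transaction.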