Tag: DNS

ISC Bind 9.18 and Windows DNS

After upgrading all of our Linux hosts to Fedora 39, we are running ISC BIND 9.18.21 … and it seems the ISC folks are finally done with Microsoft’s “kinda sorta RFC compliance”. Instead of just working around the quirks in Windows DNS servers … BIND now fails to AXFR the domain.

Fortunately, you can tell BIND to stop doing EDNS ‘stuff’ by adding a server{} section to named.conf — this gives BIND instructions on how to communicate with the listed server. When BIND is no longer trying to do EDNS “stuff”, Windows doesn’t have an opportunity to provide a bad response, so the AXFR no longer fails.
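The server{} clause ends up looking something like this — the address below is a placeholder for the Windows DNS server’s IP:

```
// named.conf -- per-server tuning for the Windows primary
server 192.0.2.10 {
    edns no;          // stop sending EDNS options to this server
    send-cookie no;   // don't send the EDNS COOKIE option either
};
```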

Web Proxy Auto Discovery (WPAD) DNS Failure

I wanted to set up automatic proxy discovery on our home network — but it just didn’t work. The website is there, it looks fine … but it doesn’t work. Turns out Microsoft introduced some security idea in Windows 2008 that prevents Windows DNS servers from serving specific names. They “banned” Web Proxy Auto Discovery (WPAD) and Intra-site Automatic Tunnel Addressing Protocol (ISATAP). Even if you’ve got a valid wpad.example.com host record in your domain, Windows DNS server says “Nope, no such thing!”. I guess I can appreciate the logic — some malicious actor can hijack all of your connections by tunnelling or proxying your traffic. But … doesn’t the fact I bothered to manually create a hostname kind of clue you into the fact I am trying to do this?!?

I gave up and added the proxy config to my group policy — a few computers, then, needed to be manually configured. It worked. Looking in the event log for a completely different problem, I saw the following entry:

Event ID 6268

The global query block list is a feature that prevents attacks on your network by blocking DNS queries for specific host names. This feature has caused the DNS server to fail a query with error code NAME ERROR for wpad.example.com. even though data for this DNS name exists in the DNS database. Other queries in all locally authoritative zones for other names that begin with labels in the block list will also fail, but no event will be logged when further queries are blocked until the DNS server service on this computer is restarted. See product documentation for information about this feature and instructions on how to configure it.

The oddest bit is that this appears to be a substring ‘starts with’ query — like wpadlet or wpadding would also fail? A quick search produced documentation on this Global Query Blocklist … and two quick ways to resolve the issue.

(1) Change the block list to contain only the services you don’t want to use. I don’t use ISATAP, so blocking isatap* hostnames isn’t problematic:

dnscmd /config /globalqueryblocklist isatap

View the current blocklist with:

dnscmd /info /globalqueryblocklist

– Or –

(2) Disable the block list — more risk, but it avoids having to figure this all out again in a few years when a hostname starting with isatap doesn’t work for no reason!

dnscmd /config /enableglobalqueryblocklist 0


Linux: Disabling Wild Local DNS Server Thing (i.e. systemd-resolved)

I am certain there is some way to configure systemd-resolved to actually use internal DNS servers so you can resolve your local hostnames. But nothing I’ve tried has worked, and I don’t actually need this wild local DNS thing.

Here’s the problem — systemd-resolved creates an /etc/resolv.conf file that uses a localhost address (127.0.0.53) as the nameserver — and that stub resolver may very well forward requests out to Internet DNS servers, which don’t have any clue about your internal DNS zones — thus you can no longer resolve local hostnames. Whenever I see 127.0.0.53 in /etc/resolv.conf, I know systemd-resolved is at work.

[lisa@linux ~]# cat /etc/resolv.conf
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
# This file might be symlinked as /etc/resolv.conf. If you're looking at
# /etc/resolv.conf and seeing this text, you have followed the symlink.
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
options edns0 trust-ad
search example.com

To disable this local name resolution, stop and disable systemd-resolved, unlink the /etc/resolv.conf file it created, and restart NetworkManager:

[lisa@linux ~]# systemctl stop systemd-resolved.service
[lisa@linux ~]# systemctl disable systemd-resolved.service
Removed /etc/systemd/system/multi-user.target.wants/systemd-resolved.service.
Removed /etc/systemd/system/dbus-org.freedesktop.resolve1.service.
[lisa@linux ~]# unlink /etc/resolv.conf
[lisa@linux ~]# systemctl restart NetworkManager

Voila, /etc/resolv.conf is now populated with reasonable internal DNS servers, and you can resolve local hostnames.

[lisa@linux ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search example.com

Maintaining an /etc/hosts record

I encountered an oddity at work — there’s a server on internally located public IP space. Because it’s public space, it is not allowed to communicate with the internal interface of some of our security group’s servers; it has to use their public interface (not a technical restriction, just a policy on which they will not budge). I cannot just use a DNS server that resolves the public copy of our zone because then we’d lose access to everything else, so we are stuck making an /etc/hosts entry. Except this thing changes IPs fairly regularly (hey, we’re moving from AWS to Azure; hey, let’s try CloudFlare; nope, that is expensive so change it back) and the service it provides is application authentication — not something you want randomly falling over every couple of months.

So I’ve come up with a quick script to maintain the /etc/hosts record for the endpoint.

# requires: dnspython (subprocess is part of the standard library)

import dns.resolver
import subprocess

strHostToCheck = 'hostname.example.com' # PingID endpoint for authentication
strDNSServer = "8.8.8.8"  # Google's public DNS server
listStrIPs = []

# Get current assignment from hosts file
listCurrentAssignment = [ line for line in open('/etc/hosts') if strHostToCheck in line]

if len(listCurrentAssignment) >= 1:
        strCurrentAssignment = listCurrentAssignment[0].split("\t")[0]

        # Get actual assignment from DNS
        objResolver = dns.resolver.Resolver()
        objResolver.nameservers = [strDNSServer]
        objHostResolution = objResolver.resolve(strHostToCheck, 'A')

        for objARecord in objHostResolution:
                listStrIPs.append(objARecord.to_text())

        if len(listStrIPs) >= 1:
                # Fix /etc/hosts if the assignment there doesn't match DNS
                if strCurrentAssignment in listStrIPs:
                        print(f"Nothing to do -- hosts file record {strCurrentAssignment} is in {listStrIPs}")
                else:
                        print(f"I do not find {strCurrentAssignment} here, so now fix it!")
                        subprocess.call([f"sed -i -e 's/{strCurrentAssignment}\t{strHostToCheck}/{listStrIPs[0]}\t{strHostToCheck}/g' /etc/hosts"], shell=True)
        else:
                print("No resolution from DNS ... that's not great")
else:
        print("No assignment found in /etc/hosts ... that's not great either")
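To actually keep the record in sync, the script needs to run on a schedule — a hypothetical root crontab entry (the script path here is made up):

```
*/30 * * * * /usr/bin/python3 /usr/local/bin/update_hosts_record.py
```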

ISC Bind – Converting Secondary Zone to Primary

Our power went out on Monday and, unfortunately, the SSD on the server with all of our VMs got corrupted. The main server has ISC Bind configured to host all of our internal DNS zones as secondaries … but, a day after the primary DNS server went down, those copies fell over. Luckily, you can convert a secondary zone to primary. The problem is that the cached copy of the zone was … funky binary stuff.

Luckily there’s an executable to convert this into a text zone file — named-compilezone:

named-compilezone -f raw -F text -o output_file_name zone_name input_file_name

So, to convert my rushworth.us zone:

named-compilezone -f raw -F text -o rushworth.us.db rushworth.us rushworth.us.db.bin

Then, in the named.conf file, change the zone type to “master” and remark out the line listing the masters. Change the “file” line to point at the newly created file. If you haven’t already done so, add “allow-query { any; };” so clients can actually query the zone.
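Sketching the named.conf change — the primary’s IP and file names below are placeholders based on the example above:

```
// Before -- secondary (slave) zone
zone "rushworth.us" {
    type slave;
    masters { 192.0.2.5; };        // the failed primary (placeholder IP)
    file "rushworth.us.db.bin";    // raw-format cached copy
};

// After -- converted to primary (master) zone
zone "rushworth.us" {
    type master;
    // masters { 192.0.2.5; };     // remarked out
    file "rushworth.us.db";        // text zone file from named-compilezone
    allow-query { any; };
};
```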

Porkbun DDNS API

I’ve been working on a script that updates our host names in Porkbun, but the script had a problem with the apex (example.com type) A records. Updating host.example.com worked fine, but example.com became example.com.example.com.

Now, in a BIND zone, you just fully qualify the record by appending the implied root dot (i.e. instead of “example.com”, you use “example.com.”), but Porkbun didn’t understand a fully qualified record. You cannot say the name is null (or “”). You cannot say the name is “example.com” or “example.com.”

In what I hope is my final iteration of the script, I now identify cases where the name matches the zone and don’t include the name parameter in the JSON data. Otherwise I include the ‘name’ as the short hostname (i.e. the fully qualified hostname minus the zone name). This appears to be working properly, so (fingers crossed, knock on wood, and all that) our ‘stuff’ won’t go offline next time our IP address changes.
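The name-handling logic is simple enough to sketch — the function below is a hypothetical reconstruction, not Porkbun’s API client; only the name handling mirrors what’s described above:

```python
def build_record_payload(fqdn, zone, ip):
    """Build the JSON body for an A-record update.

    When the record is the zone apex, omit 'name' entirely;
    otherwise send only the short host portion.
    """
    fqdn = fqdn.rstrip(".")
    zone = zone.rstrip(".")
    payload = {"type": "A", "content": ip, "ttl": "600"}
    if fqdn != zone:
        # strip the trailing ".zone" to get the short hostname
        payload["name"] = fqdn[: -(len(zone) + 1)]
    return payload
```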

It’s Not A DNS Problem

I used to work at a company where everything was called an Exchange problem — not that Exchange 2000 didn’t have its share of problems (store.exe silent exit failures? Yes, that’s absolutely an Exchange problem) … but the majority of the time, the site had lost their connectivity back to the corporate data center. Or, when I’d see the network guys sprinting down the hallway as the first calls started to come in … the corporate data center had some sort of meltdown.

I’m reminded of this as I see people calling the Facebook outage a “DNS problem”. Facebook’s networks dropped out of BGP routing. That means there’s no route to their DNS server, so you get a resolution failure. It doesn’t mean there’s a DNS problem. Any more than it means there’s an IP or power problem — I’m sure it’s all working as designed and either someone screwed up a config change or someone didn’t screw up and was trying to drop them off the Internet.

Saw much the same thing back when Egypt dropped off of the Internet back in 2011 — their routes were withdrawn from the routing tables. That’s an initiated process — maybe accidental, but it’s not the same as a bunch of devices losing power or a huge fiber cut.

And, when there’s no route you can use to get there … if DNS, web servers, databases, etc are working or not becomes moot.

Testing A New Web Server Without DNS Changes

When migrating to a new server, it’s good to validate site functionality before redirecting users to the new host. E.g., I have anya.rushworth.us set up in the httpd config on both server1 and server2. DNS currently points traffic to server1, but I need to test the site on server2.

Approach #1 – With administrative access to the host

Edit your hosts file – open an administrative command prompt

Edit %SYSTEMROOT%\system32\drivers\etc\hosts and add lines with the IP address, whitespace, and the hostname(s). E.g., using 192.0.2.20 as a stand-in for server2’s address:

192.0.2.20 lisatest lisatest.rushworth.us
192.0.2.20 lisatest2 lisatest2.rushworth.us
192.0.2.20 otherhost otherhost.rushworth.us
192.0.2.20 anya anya.rushworth.us

Clear your DNS cache (ipconfig /flushdns) and navigate to the URL. You’ll be directed to the IP address from your hosts file instead of the DNS-registered address.

Approach #2 – No admin access

Install the ModHeader extension in your Chrome or Firefox browser, then click the extension icon to set a header value.

Add a “Host” header with the value of the virtual host name you need to test

Navigate to the hostname of the new server – https://server2.rushworth.us – the web server will receive the Host header you configured in ModHeader and serve the web site based on that host header.
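From a command line, curl can test the same thing without admin rights or browser extensions — its --resolve option pins the name to an IP (192.0.2.20 below is a placeholder for server2’s address):

```
# Connect to server2's IP while presenting the production hostname,
# so both TLS SNI and the Host header say anya.rushworth.us
curl --resolve anya.rushworth.us:443:192.0.2.20 https://anya.rushworth.us/
```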


Response Policy Zone (RPZ)

Years ago, Paul Vixie developed a component of the BIND DNS server that allows server owners to easily override specific hostnames. We had done something similar for particularly bad hostnames — if your workstations use your DNS servers, you just have to declare yourself the name server for a domain that has the same name as the hostname you want to block (i.e. I become the NS record for forbidden.google.com, and my clients are able to resolve all other records within the google.com zone; but when they resolve forbidden.google.com … they get whatever I provide). I usually did this to route traffic over a B2B VPN — providing the private IP address instead of the public IP from the domain owner’s name servers. But for a few really bad malware variants, I overrode their hostname. Problem was, the technique wasn’t exactly easy: every single host required a new DNS zone to be created, configured on your DNS servers, and (at least in BIND) the service restarted.

Response Policy Zone was pitched as functionality that would allow service providers (ISPs) to filter DNS responses for their subscribers. That’s not a use case I foresee (it’s a lot of manual work), but it has become an important component of our company’s network security. Hosting an RPZ domain allows us to easily add new overrides for B2B VPN connected hosts. But it also means we can override hostnames that appear in phishing e-mail campaigns, malware hosts, infected web sites … basically anything we don’t want employees accessing.

Stopping clients from accessing infected sites is a great thing; but for hostnames that are indicative of a compromised box (i.e. there’s a difference between an employee clicking on a link within their e-mail that links them to a specific host and someone having malware on their box that automatically contacts a specific host), we set the IP address for the hostname to a honeypot.

The honeypot is bound to all unused ports on the host (there aren’t a lot of used ports on it), logs all contact to a database, then basically hangs the connection. We have a scheduled job that looks at the contact log and opens a ticket to the desktop support team to investigate the compromised host.
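A minimal sketch of what this looks like — the zone name, file names, and addresses here are all placeholders:

```
// named.conf -- enable the policy zone
options {
    response-policy { zone "rpz.example.com"; };
};

zone "rpz.example.com" {
    type master;
    file "rpz.example.com.db";
};
```

The zone file then holds one record per override; owner names are the hostnames being overridden, relative to the RPZ zone origin:

```
; rpz.example.com.db
badhost.partner.example      A      10.1.2.3   ; B2B VPN private address
malware-c2.example           A      10.9.9.9   ; honeypot address
phishing-site.example        CNAME  .          ; return NXDOMAIN
```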

Microsoft Directories – NT and Windows 2000/2003/2008/2012

Windows NT provided a limited repository for user IDs and passwords.  NT domain credentials had the advantage of providing single-sign-on access to other Microsoft resources such as file shares and Exchange.  Exchange itself housed a secondary directory, used for the “global address list” type details for Exchange accounts.  Address, phone number, manager, email addresses … basically anything other than the user’s ID and password was stored within the Exchange directory.  The Exchange directory then linked each account to an NT4 domain user account for logon credentials.

With Windows 2000, Microsoft integrated the two directories into Active Directory.  This allowed a more robust set of user details to be provided – and moved the LDAP compliant directory off the Exchange server onto the domain controllers.  Major changes were introduced in Active Directory – an increased maximum object count (from 40,000 to ten million in a single domain, with billions of objects in an AD forest), multi-master architecture, and attribute-level replication being some of the key changes.

Data Store

Active Directory data is stored in ntds.dit.  ESE (Extensible Storage Engine) is used to access the data within the database.  In addition to ntds.dit, there are several peripheral database files – edb.log is the current in-use transaction log file.  EDB#####.log files may be present if the edb.log file has been filled.  EDB.CHK is the checkpoint file – this keeps track of which transactions have been committed to ntds.dit; after a crash of the system, the transaction logs are replayed from the pointer referenced in the chk file.  Res1.log and res2.log, ten megabytes in total, are placeholder files – should the server run out of disk space, these files are removed to allow continued operation.

Within NTDS.DIT there are two main tables:

  • The link table – metadata for calculating linked values
  • The data table – actual domain data

There are four other tables about which no additional information will be provided herein:

  • System Table – metadata for the DSA-defined tables and indices
  • HiddenTable – DSA metadata
  • SDPropTable – Transiently stores Security Descriptor propagation, records are removed from table as propagation completes
  • MSysDefrag1 – ESE database table, not specific to AD

For linked attributes, the backlinked attribute is not modified directly but rather determined when it is queried.  As an example – Active Directory generates a reporting structure.  An object has a manager, but the “reports” listing is calculated from other objects’ manager attributes.  The linkID of a forward link attribute is always even, and its associated backlink attribute’s linkID is always the forward linkID plus one (consequently always odd).  A full list of forward/back link pairs can be generated by looking at the linkID values.
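As an illustration of the even/odd pairing – manager/directReports, linkIDs 42 and 43, being the classic example:

```python
def backlink_id(forward_link_id):
    """Back link linkID is the forward linkID plus one (forward IDs are even)."""
    assert forward_link_id % 2 == 0, "forward links always have even linkIDs"
    return forward_link_id + 1

# The classic pair: manager (linkID 42) -> directReports (linkID 43)
```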

The data table contains three different naming contexts – the schema, the configuration, and the domain data.  These correspond to the three partitions shown in REPLMON – “cn=schema,cn=configuration,dc=windstream,dc=com”, “cn=configuration,dc=windstream,dc=com”, and “dc=windstream,dc=com”.  The term partition in Active Directory is used to indicate a naming context – in no way related to Novell’s use of the term to indicate a replication boundary.

The schema and configuration partitions are replicated to all domain controllers in a forest – since we only have one tree in the forest, the point is somewhat moot, as all the domain controllers in the domain are also all the domain controllers in the forest.  The domain partition is replicated to all domain controllers in the domain.


Active Directory – Schema

Microsoft’s documentation on unmodified schema classes and attributes can be found at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adschema/adschema/active_directory_schema.asp   The modifications Exchange makes to the AD schema can be found at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wss/wss/wss_ldf_AD_Schema_intro.asp

The schema management MMC is not automatically available on a Windows machine.  To enable the snap-in, run regsvr32 c:\winnt\system32\schmmgmt.dll – then “Active Directory Schema” will be an option when adding snap-ins to MMC

Active Directory’s schema is normally in a read-only mode and no user has rights to modify the schema.  Prior to enacting a schema change, then, you must enable schema writes and add your account to the “Schema Admins” group.  To enable schema writes, right click on the “Active Directory Schema” item in the MMC and select “Operation Master”.  Then check the box next to “The Schema may be modified on this domain controller”

When creating new schema classes or attributes, ensure you use the correct OID for our organisation.  Preferably, too, create auxiliary classes and associate the aux class with a structural class.  This prevents any vendor changes to the structural class from impacting your schema attributes.

In AD, schema classes and attributes cannot be deleted (well, they can, but the process is unsupported).  An attribute can be deactivated, but it remains in the schema definition.

Active Directory – Configuration

The AD Configuration partition holds, as the name implies, configuration for the domain and some services within the domain.

  • Display Specifiers: Under the DisplaySpecifiers CN you will see multiple three-digit hex numbers.  These are codes for different languages – 409 being English.  http://www.microsoft.com/globaldev/reference/win2k/setup/lcid.mspx lists the codes used within the Windows internationalisation features.  Under each regional container you will find the actual display specifier for structural schema objects.  The user-Display object, for instance, defines what appears when you right-click a user object in Active Directory Users and Computers.  Another attribute defines the pages which appear when you create a user and the order in which those pages appear.  The createDialog attribute is of particular interest – we modify this to automatically create the display name as lastname, firstname MI when you manually create a user within AD.  This is done by defining the createDialog value as “%<sn>, %<givenName> %<initials>”
  • Extended Rights:  On the controlRightsAccess object, appliesTo defines structural schema objects to which the controlRightsAccess object applies.  The controlRightsAccess objects themselves have several functions.
    • When validAccesses is set to 8, this is to validate writes – or check the attribute value beyond the schema definition.  Implementation is not widespread.
    • When validAccesses is 256, the object defines an actual extended right – something not part of the normal ACLs.  Receive-As and Send-As, for instance, are special operations for Exchange which can be found in the ExtendedRights container.
    • Other validAccesses codes define ACL groups which can be assigned through the “Delegate Control” function – validAccesses indicates what rights the ACL group permits: 16 for read, 32 for write, and the sum, 48, for read/write access.  The membership object in ExtendedRights, with appliesTo bf967aba-0de6-11d0-a285-00aa003049e2 and validAccesses of 48, means this access group allows whomever is granted it to both read and write user objects (bf967aba-0de6-11d0-a285-00aa003049e2 is the GUID of the user schema object).  On the schema object “member”, then, the rightsGUID is entered as the attributeSecurityGUID.

An example of the rights grouping is the “Personal-Information” object, rightsGUID 77B5B886-944A-11d1-AEBD-0000F80367C1.  You will find the corresponding octet string, 0x86 0xb8 0xb5 0x77 0x4a 0x94 0xd1 0x11 0xae 0xbd 0x00 0x00 0xf8 0x03 0x67 0xc1, applied to several schema attributes – telephoneNumber, facsimileTelephoneNumber, streetAddress, telexNumber, and so on:

Thus using the “Delegation Of Control” wizard, it is possible to select “Read and write Personal Information” as a permission set rather than specifying each individual attribute you want editable. Note, too, in the ACL editor the listing of “Personal Information” is retained
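The octet string is just the GUID’s mixed-endian binary layout – a quick sketch to convert one to the other:

```python
import uuid

def guid_to_octet_string(guid_str):
    # bytes_le gives the on-disk byte order: the first three GUID fields
    # are little-endian, the rest big-endian -- matching what you see in
    # attributeSecurityGUID values
    return " ".join(f"0x{b:02x}" for b in uuid.UUID(guid_str).bytes_le)

print(guid_to_octet_string("77B5B886-944A-11d1-AEBD-0000F80367C1"))
# -> 0x86 0xb8 0xb5 0x77 0x4a 0x94 0xd1 0x11 0xae 0xbd 0x00 0x00 0xf8 0x03 0x67 0xc1
```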


ForestUpdates

Under ForestUpdates you will see an “Operations” CN.  Operations holds a listing of updates made to the forest (e.g. Exchange /forestprep).  This allows the system to check that the requisite forest updates are in place prior to installation without requiring the changes to be re-run.


LostAndFoundConfig

This is basically the same thing “LostAndFound” in the domain naming context is, but within the configuration partition.  All things being equal, it should be empty.  Should an object be created within the Configuration partition at the same time its parent is deleted, the object is moved to “LostAndFoundConfig” for holding.


Partitions

Contains crossRef objects for all partitions within the forest – again not as interesting here as it could be, with just one tree and domain.

Physical Locations

This is intended for use with Directory Enabled Networking.  The DEN concept is maintained by DMTF (http://www.dmtf.org/standards/wbem/den/) and is not at present implemented at Windstream


Services

Forest-wide application settings – objects within this container correspond directly to the “Services” listed within the “Active Directory Sites And Services” snap-in.  One of the services listed is Microsoft Exchange.  Should a server fail, running setup /disasterrecovery will recover most of the Exchange settings for the server from within this container.


Sites

The “Sites” of “Active Directory Sites and Services”.  IP subnets and their associated sites are defined in this container, as well as the replication partnerships between domain controllers.

WellKnown Security Principals

What I call the “virtual credentials” – system security credentials like Everyone and Self are defined herein.

Active Directory – Domain Data

Objects specific to just one domain within the forest – the obvious users, computers, printers, file shares, groups, and contacts.  Less obvious items too are stored within the domain data.  If Windows DNS zones are configured as “Active Directory Integrated”, the DNS entries will appear under “cn=MicrosoftDNS,cn=System,dc=…”.  File Replication Service (FRS) shares (including the domain SYSVOL), some information on Group Policies, Oracle database connections … any of the structural schema objects … can also be found within this partition.

An object named Infrastructure sits in the root of the domain naming context; this object holds the NTDS settings for the domain’s Infrastructure operations master.

Flexible Single Master Operations (FSMO) Roles

FSMO roles are assigned for functions which cannot practically be performed by every domain controller – functionality which cannot subscribe to the multi-master principle.

There are two forest-wide FSMO roles, the Domain Naming Master and the Schema Master.

  • The Schema Master is the server on which writes can be made to the schema.  All domain controllers will have a read-only copy of the schema, but only the schema master can write changes.
  • The Domain Naming Master is used when a new domain is created within a forest – it verifies the new domain has a unique name.

Three additional FSMO roles exist in each domain within a forest.  The Infrastructure Master, RID Master, and PDC Emulator.

  • The Infrastructure Master, in a multi-domain environment, handles cleanup of phantom objects created as members are added to groups via a trust.  The cleanup process is detailed by Microsoft at http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q248047.  As we have a single domain, this is somewhat immaterial.  Should we begin implementing other domains, the Infrastructure Master will need to be moved to a non-global-catalogue (GC) server.  The GC functionality precludes the phantom objects from being created (and hence from being purged).
  • The RID master allocates blocks of relative ID’s, RID’s, to the domain controllers within the domain to ensure unique GUID’s.  Should the RID master be offline for a short interval, new objects can still be created until the already-allocated RID block has been exhausted.
  • The PDC emulator is multi-function.  Were the domain to be in mixed-mode and therefore support NT4 BDC’s, the PDC emulator is required by the NT4 domain controllers for backwards compatibility.  Our domain is in native-mode and cannot have NT4 BDC’s.  This does not preclude NT4 member servers, just domain controllers.  The PDC Emulator server is authoritative for the user’s password.  Any failed logons are re-checked against the PDC emulator.  In the NT4 environment this was because a BDC was a read-only directory copy to which password changes could not be made.  If you changed your password and attempted to authenticate prior to the domain replicating the change completely, you could receive an invalid password error using your correct new password.  To prevent this issue, a password failure on the BDC was re-checked with the PDC before the logon attempt was failed.  This is how we can allow CSO password changes into AD without requiring the user to wait for domain synchronisation.  The DirXML AD driver is installed to the PDC emulator server to allow immediate use of the user’s new CSO password.  Group Policy Objects are created and edited on the PDC Emulator’s SYSVOL share.  The PDC emulator is also the time source for the domain.  Our PDC emulator is configured to use time.windstream.com as its time source with a time sync period of eight hours.
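The time-source configuration on the PDC emulator can be set with the Windows Time service tools – a sketch using the time server name from above:

```
w32tm /config /manualpeerlist:time.windstream.com /syncfromflags:manual /update
```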

Normally you can move the FSMO roles between domain controllers using MMCs.  For the three per-domain roles, the change is made in Active Directory Users and Computers.  The Domain Naming Master is changed from “Active Directory Domains and Trusts”; the Schema Master is changed within “Active Directory Schema Manager”.

Within Active Directory Users and Computers, right click the domain and select “Connect To Domain Controller” – select the domain controller which will receive the new role.  Then right click the domain and select “Operation Masters”.  You just click the “Change” button to move the role.

In the event of a catastrophic server failure complete with no system state backups, you can forcibly transfer the FSMO roles from a non-operational source.  We have done this once in production, in the ICM domain, but there is additional peripheral cleanup required to remove the failed domain controller from operation.  http://support.microsoft.com/kb/255504/ contains instructions for seizing FSMO roles.  Microsoft mostly documents the domain cleanup process at http://support.microsoft.com/kb/216498/.  If you want to try it for the experience, build two servers, create a fake domain with the two of them, turn one off, and seize all the roles onto the remaining machine.  This is effectively what happened in the ICM domain – they had three domain controllers, but the first, which held all the roles, was destroyed.  Be careful in production, as the post-seizure cleanup is not fully documented.  DNS entries will still exist in BIND.  It is possible for your domain controller machine password to be out of sync with the domain.  I’m sure there are other situations which could arise as well which we didn’t happen across.

Domain Registration – WINS

The WINS entries for your domain should only be used by ‘legacy’ clients, NT4 workstations and servers.  If you configure your domain controllers’ TCP/IP properties to use your WINS servers, the registration for the domain will be created automatically.  Alternately, you can create an LMHOSTS file for import into a foreign WINS server.  The only reason we do this is to establish a trust with an NT4 domain.  There are two records needed – for our domain the text is included herein (each preceded by the registering domain controller’s IP address):

SCARLITNT631         #DOM:ALLTEL          #PRE
"ALLTEL         \0x1b"  #PRE

If you are attempting to create the LMHOSTS file for an alternate domain, you can change all of the values except the format of the “ALLTEL         \0x1b” entry.  Between the quotation marks there are exactly fifteen characters – the domain name padded with trailing spaces (insert holy hand grenade like joke here) – followed by the \0x1b, then the closing quotation mark.  If your domain name is BOB you cannot simply replace ALLTEL with BOB; you need to replace it with BOB plus three additional trailing space characters so the padded name remains fifteen characters.
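A quick sketch of the padding rule (a hypothetical helper, not part of any Microsoft tooling):

```python
def lmhosts_domain_entry(domain):
    # The quoted name must be the domain padded with spaces to exactly 15
    # characters, with \0x1b as the 16th character inside the quotes
    padded = domain.ljust(15)
    assert len(padded) == 15, "NetBIOS domain names are at most 15 characters"
    return '"%s\\0x1b"  #PRE' % padded
```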

Domain Registration – DNS

There are four “underscore zones” – new DNS zones used to store the SRV records relevant to the domain. Active Directory works fine with BIND DNS servers – you need to allow dynamic updates from the domain controller IP addresses. Since I do not allow dynamic updates on the root zone, I manually add the domain controller A records.
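In BIND, that means something like the following for each underscore zone – zone name from our domain, with placeholder addresses standing in for the domain controller IPs:

```
zone "_msdcs.alltel.com" {
    type master;
    file "db._msdcs.alltel.com";
    allow-update { 192.0.2.30; 192.0.2.31; };  // domain controller addresses (placeholders)
};
```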

  • _sites.domain.tld.   Service records advertise servers providing global catalogue, Kerberos, and LDAP services within each site.  The sites are differentiated within the record name – _service._tcp.SITENAME._sites.domain.tld.    The following lines are the _sites records for the TWNUserAuth site
$ORIGIN _tcp.TWNUserAuth._sites.alltel.com.
_gc                               SRV     0 100   3268    neohtwnnt630.alltel.com.
_kerberos                         SRV     0 100   88      neohtwnnt630.alltel.com.
_ldap                             SRV     0 100   389     neohtwnnt630.alltel.com.
_gc                               SRV     0 100   3268    neohtwnnt631.alltel.com.
_kerberos                         SRV     0 100   88      neohtwnnt631.alltel.com.
_ldap                             SRV     0 100   389     neohtwnnt631.alltel.com.
  • _tcp.domain.tld. Service records advertise all domain controllers within the domain providing global catalogue, Kerberos, LDAP, and kpasswd services.  The following lines are the _tcp records for the NEOHTWNNT630 server
$ORIGIN _tcp.alltel.com.
_gc                           SRV     0 100   3268    neohtwnnt630.alltel.com.
_kerberos                     SRV     0 100   88      neohtwnnt630.alltel.com.
_kpasswd                      SRV     0 100   464     neohtwnnt630.alltel.com.
_ldap                         SRV     0 100   389     neohtwnnt630.alltel.com.
  • _udp.domain.tld. Used for UDP kerberos connections to get tickets and change passwords.  Service records in this zone advertise the UDP Kerberos and kpasswd services for the domain.  The following lines are the _udp records for the NEOHTWNNT630 server
$ORIGIN _udp.alltel.com.
_kerberos                   SRV     0 100   88       neohtwnnt630.alltel.com.
_kpasswd                    SRV     0 100   464      neohtwnnt630.alltel.com.
  • _msdcs.domain.tld.  Kerberos, LDAP, and global catalogue records – both site-specific and domain-wide.  In addition, each domain controller’s GUID used for replication is registered here.  Again the example provides the service records for NEOHTWNNT630
$ORIGIN _msdcs.alltel.com.
47c1965e-87e8-4445-8552-fd20892c08c2    CNAME          neohtwnnt630.alltel.com.
_ldap._tcp.e0f0a709-9edf-483b-96e6-55c0dd55c1a6.domains           SRV 0 100 389             neohtwnnt630.alltel.com.
gc                      A
$ORIGIN _tcp.dc._msdcs.alltel.com.
_kerberos                                 SRV     0 100   88        neohtwnnt630.alltel.com.
_ldap                                     SRV     0 100   389       neohtwnnt630.alltel.com.
$ORIGIN _tcp.TWNUserAuth._sites.dc._msdcs.alltel.com.
_kerberos                                 SRV     0 100   88        neohtwnnt630.alltel.com.
_ldap                                     SRV     0 100   389       neohtwnnt630.alltel.com.
$ORIGIN gc._msdcs.alltel.com.
_ldap._tcp                                SRV     0 100   3268    neohtwnnt630.alltel.com.
$ORIGIN _sites.gc._msdcs.alltel.com.
_ldap._tcp.TWNUserAuth                    SRV     0 100   3268    neohtwnnt630.alltel.com.

The PDC emulator is also advertised here

$ORIGIN _msdcs.alltel.com.
_ldap._tcp.pdc                         SRV     0 100   389      scarlitnt631.alltel.com.


Client Authentication

A client which has already authenticated to the domain will have a registry entry which retains the client’s site (DynamicSiteName under HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters).


When a client attempts to authenticate to Active Directory, the service records for the Kerberos service are used to determine an appropriate authentication source.  For a PC in the LITUserAuth site, a query for _kerberos._tcp.LITUserAuth._sites.dc._msdcs.alltel.com service records is made.  An LDAP connection is initiated over udp/389 to every domain controller returned by the DNS query, each connection initiated at 1/10th-second intervals.  The receiving servers compare the client’s IP address to the subnet configuration to verify the client is reaching the correct site for its current subnet.  The first LDAP response received is then used as the Kerberos authentication server.  If the client’s site is incorrect, a referral is returned for the correct site – which then prompts the client to re-query DNS for the new site.
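You can reproduce the client’s locator query by hand (site and domain names from the example above; this obviously only resolves inside the domain):

```
nslookup -type=SRV _kerberos._tcp.LITUserAuth._sites.dc._msdcs.alltel.com
```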