This is a quick post where I will share a recent experience from a customer advisory call.
The customer has several dozen branch offices. In each site, a 'big' server is deployed, hosting several virtual machines that provide 'vital' infrastructure such as:
- Active Directory Domain Controller (RODC) + DHCP + DNS + Print services
- File Server
- SCCM Distribution point
The question arose while we were studying some DR and service-continuity scenarios: the branch-office workloads were in scope, but their priority was very low, so the question became: how can I minimally protect the branch-office data with zero investment?
This wasn't a very difficult question, and the answers were as follows:
- AD + DNS + DHCP + Print services :
- AD services : When the RODC is not reachable, clients automatically contact the primary domain controllers at the main site (through the S2S VPN or MPLS). This is built-in AD behavior –> Solved
- DNS : The secondary DNS servers configured via DHCP are the main-site DNS servers –> Solved
- DHCP : This is a vital service; without DHCP, clients will not obtain IP addresses and will not be able to work. The solution was to configure (available since Windows Server 2012) a hot-standby failover relationship with the main site. The branch office's network devices only need to support IP helpers –> Solved
- SCCM DP : The SCCM distribution point serves deployed packages from a nearby location (50 clients downloading a 1 GB Office 2016 package or Windows updates from a local server beats pulling them over a VPN connection). Just as with domain controllers, if a client cannot reach the 'nearest' DP, it contacts the next one, which can be the main-site DP –> Solved
- File server : This was the hardest question. How can we protect the file servers' data and rebuild them in case of disaster, data loss, or anything similar? Let's discuss this case more deeply.
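The DHCP and DNS answers above can be scripted. As a minimal sketch (the server names, scope ID, and IP addresses here are placeholders I invented, not values from the customer environment), the Windows Server 2012+ hot-standby failover relationship and the DHCP-delivered secondary DNS servers might look like this:

```powershell
# Run on the branch-office DHCP server (Windows Server 2012 or later).
# All names and addresses below are illustrative placeholders.

# Hot-standby failover: the main-site server takes over only when the
# branch server is down, holding a small reserve of the address pool.
Add-DhcpServerv4Failover -ComputerName "BRANCH-DHCP01" `
    -PartnerServer "MAINSITE-DHCP01" `
    -Name "Branch01-HotStandby" `
    -ScopeId 10.10.1.0 `
    -ServerRole Active `
    -ReservePercent 5

# DHCP option 6 (DNS servers): branch DNS first, then the
# main-site DNS servers as secondaries.
Set-DhcpServerv4OptionValue -ScopeId 10.10.1.0 `
    -OptionId 6 -Value 10.10.1.10, 10.0.0.10, 10.0.0.11
```

With this in place, the branch router's IP helper simply relays DHCP requests to the main site while the local server is down.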
The file server story
The file server is not stateless
What distinguishes the file server from the other servers is that it holds changing data. If we lose this data (data loss, ransomware, accidental deletion...), there is no built-in way to recover it.
Availability or Recovery ?
There are two requirements for file server data:
Availability : the need to access the data even when the file server goes down
Recovery : the need to get the data back when required. Recovery can mean rebuilding the server (after a server loss) or restoring a set of files/folders as part of an item-level recovery (deleted files, older versions, ransomware...)
The file server solution
Faced with both needs, I proposed the easiest way to meet each one:
Availability : The easiest way to achieve availability for branch-office file servers (with minimal infrastructure) is to enable DFS-R and DFS-N. DFS-R replicates your files to another server at the main site. DFS-N creates a virtual view of the shared folders, so the same UNC path lands on the office's file server and, after a failover, on the main-site file server (where the replicated files reside). This solution is very simple to implement, and the main-site server can be a replication target for multiple offices. The only requirements are office-to-main-site bandwidth and main-site storage.
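As a rough sketch of that setup (all server, group, and path names below are hypothetical), the DFS-R replication toward the main site and a domain-based DFS namespace with two folder targets could be configured like this:

```powershell
# Replicate the branch share to a main-site server (names are placeholders).
New-DfsReplicationGroup -GroupName "Branch01-Files"
New-DfsReplicatedFolder -GroupName "Branch01-Files" -FolderName "Shares"
Add-DfsrMember     -GroupName "Branch01-Files" -ComputerName "BRANCH-FS01","MAINSITE-FS01"
Add-DfsrConnection -GroupName "Branch01-Files" `
    -SourceComputerName "BRANCH-FS01" -DestinationComputerName "MAINSITE-FS01"
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "Shares" `
    -ComputerName "BRANCH-FS01" -ContentPath "D:\Shares" -PrimaryMember $true
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "Shares" `
    -ComputerName "MAINSITE-FS01" -ContentPath "E:\DFSR\Branch01"

# Domain-based namespace: clients always use \\contoso.com\Files\Shares.
New-DfsnRoot -Path "\\contoso.com\Files" -TargetPath "\\BRANCH-FS01\Files" -Type DomainV2
New-DfsnFolder -Path "\\contoso.com\Files\Shares" -TargetPath "\\BRANCH-FS01\Shares"
# Second folder target: clients fall back to it when the branch is unreachable.
New-DfsnFolderTarget -Path "\\contoso.com\Files\Shares" -TargetPath "\\MAINSITE-FS01\Branch01"
```

The same main-site server can appear as the secondary target for every branch namespace, which is what keeps the infrastructure footprint minimal.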
Recovery : When we say recovery, we say backup. The challenge was to find a 'simple' backup solution that:
- Backs up the shares
- Restores files through an item-level restore mechanism (for both users and admins)
- Does not use local storage, since the office's infrastructure is limited (and local storage would not protect against a site disaster anyway)
I was very lucky when this 'small' challenge came up, since I was already familiar with the Azure Backup MARS agent experience.
Why was I lucky ?
Backing up (and restoring) data via the Azure Backup MARS (Microsoft Azure Recovery Services) agent is very attractive in this case for several reasons:
- Deployment simplicity : To back up data, you just download the MARS agent, install it, and choose what to back up, when, and where.
- No infrastructure : You don't need to deploy a backup infrastructure or provide local storage. The MARS agent backs up to Azure cloud storage via Recovery Services vaults. A vault is a cloud backup space that you create first (one per file server, one per region, or one for all) and then select in the backup configuration wizard.
- Item-level restore : The admin can easily perform an item-level restore of backed-up items.
- Limitless capacity and retention : Azure Recovery Services provides practically unlimited storage and retention periods of up to 99 years.
- Encrypted backup data : The data backed up to the cloud is encrypted with a passphrase that only you know.
- Management from the cloud : Operations (backup jobs, consumed storage, registered servers, notifications, alerts...) are easily managed from a single place, the Azure Portal.
Azure Backup MARS agent experience
Backup using the MARS agent: the steps (picture credit: Microsoft)
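The vault creation and backup-policy steps can also be scripted end to end. As an illustration only (the resource group, vault name, region, schedule, and paths are placeholders I invented), creating a Recovery Services vault with the Az PowerShell module and then defining a MARS backup policy on the file server might look like this:

```powershell
# Requires the Az PowerShell module and a prior Connect-AzAccount.
# All names below are illustrative placeholders.
New-AzResourceGroup -Name "rg-branch-backup" -Location "westeurope"
$vault = New-AzRecoveryServicesVault -Name "vault-branch01" `
    -ResourceGroupName "rg-branch-backup" -Location "westeurope"

# Download the vault credentials file; it is consumed by the MARS agent
# registration wizard on the branch file server.
Get-AzRecoveryServicesVaultSettingsFile -Vault $vault -Backup -Path "C:\Temp"

# On the file server (after installing the MARS agent), the backup policy
# can be scripted with the agent's bundled MSOnlineBackup cmdlets:
$policy   = New-OBPolicy
$schedule = New-OBSchedule -DaysOfWeek Monday,Wednesday,Friday -TimesOfDay 21:00
Set-OBSchedule -Policy $policy -Schedule $schedule
$retention = New-OBRetentionPolicy -RetentionDays 90
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
Add-OBFileSpec -Policy $policy -FileSpec (New-OBFileSpec -FileSpec "D:\Shares")
Set-OBPolicy -Policy $policy
```

Repeating this per office (or pointing several servers at one vault, depending on the chosen vault layout) keeps the rollout consistent across tens of sites.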
What else ?
All the requirements were met. The backup solution fits the needs and has a very short time to market (TTM).
If you are facing the challenge of protecting branch-office data (connected to a main site), do not hesitate to use 'simple' means to achieve it, in order to simplify your architecture and optimize costs. Use Azure Backup to protect any workload (even Linux is supported) and to guarantee that your data is safe in a remote location. The following table summarizes my post:
How to ensure availability or recovery:
- Active Directory Domain Controller : the failover to another DC is built-in
- DHCP : Windows Server 2012 (and later) DHCP failover
- DNS : secondary remote DNS servers