How to protect and back up your branch and remote offices' data (files and folders)?

Hi everyone,

Since the first days of companies adopting information systems, backing up workloads has been crucial, and a production blocker: no production without backup; no backup, no business.

Today, companies have a better grasp of their backup needs and solutions, and they are continually seeking better, simpler and more cost-effective backup software.

One of the 'headache' subjects that bothers the majority of backup admins and decision makers is backing up 'Files and Folders' data in Remote Offices / Branch Offices (ROBO).

In this post, I will show why Azure Backup via the MARS agent is your best choice to get rid of the ROBO backup problem. I will present:

  • Use cases for Azure Backup via the MARS agent
  • What you need to know in order to be comfortable with this solution
  • The steps to plan and start using Azure Backup via the MARS agent for your ROBO

1- Use cases for Azure Backup via the MARS agent

Azure Backup is the name of a complete enterprise backup solution supporting several backup scenarios and using the latest technologies, especially the ability to back up to the cloud and to benefit from a GFS (Grandfather-Father-Son) model, which allows efficient long-term retention policies.

What is interesting about Azure Backup via the MARS agent is that it lets you back up your files and folders without deploying a backup infrastructure or a storage infrastructure. This opens up a lot of use cases:

Backup without backup infrastructure

The following picture shows the end-to-end data journey from your Windows server or workstation to the cloud storage (more details about the components later in this post). As you can see, the backup only requires installing the Azure Backup agent (MARS agent: Microsoft Azure Recovery Services agent) and configuring it to back up data to a cloud location (a Recovery Services vault).

This is fantastic, since it removes the classic requirements for enabling workload backups:

  • Backup software infrastructure (backup server, backup proxy…)
  • Local storage: no need for a SAN or a NAS. Azure Backup sends data directly to the cloud over an internet connection

Short- and long-term retention without backup infrastructure

In addition to the value discussed above, Azure Backup provides short- and long-term retention within the same policy. No need for tapes, and no need for an external provider to handle them. Azure Backup uses a GFS model to allow long-term retention without any additional configuration. You can reach a retention period of up to 99 years and up to 9999 recovery points (these values may change in the future).

Low-bandwidth / high-latency ROBO locations

The Azure Backup agent supports throttling (2) the data transfer to the cloud location (not on all operating systems). This is very important for ROBO locations whose limited bandwidth prevents you from backing up to a central backup repository.
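Here is a minimal sketch using the MSOnlineBackup PowerShell module that ships with the MARS agent; the work days, hours and bandwidth values are illustrative assumptions to adapt to your ROBO link:

Import-Module MSOnlineBackup
# Limit backup traffic to 512 Kbps during work hours and 2 Mbps outside them (assumed values)
Set-OBMachineSetting -WorkDay "Mo", "Tu", "We", "Th", "Fr" -StartWorkHour "9:00:00" -EndWorkHour "18:00:00" -WorkHourBandwidth (512*1024) -NonWorkHourBandwidth (2048*1024)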


2- What you need to know

In this section, I will summarize the important information you need to know about Azure Backup (especially with the MARS agent). This information will give you the ability to decide, design and implement Azure Backup in your information system.

2.1- Pricing

Fortunately, Azure Backup pricing is very simple. It's well explained in the official documentation (1), but to summarize:

When you back up a workload, you pay for:

  • An Azure Backup fixed cost for each backed-up instance (the cost depends on the size of the data being backed up)


  • The storage used by the recovery points:
    • You can choose between LRS and GRS storage (3). In short, LRS (locally redundant storage) keeps data only within the region where you created the Recovery Services vault. GRS (geo-redundant storage) replicates data asynchronously to another, paired region, providing protection against a region failure, but it is more expensive (4) (roughly twice the price)
    • The redundancy cannot be changed after the first workload backup, so be sure of your decision before going forward


For example, if you back up 4 Windows servers, you will pay:

  • 4 * the Azure Backup fixed cost
  • The cost of the Azure storage (cloud storage) used by the recovery points
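To make the arithmetic concrete, here is a tiny sketch; the fixed cost and storage price below are illustrative assumptions, not official figures, so always check the pricing page (1):

# Assumed, illustrative prices only; see (1) for the real figures
$fixedCostPerInstance = 10      # assumed €/month per backed-up instance
$storageUsedGB = 500            # assumed recovery-point storage for the 4 servers
$storagePricePerGB = 0.02       # assumed €/GB/month
4 * $fixedCostPerInstance + $storageUsedGB * $storagePricePerGB   # = 50 €/month in this example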

2.2- Requirements

In this section, I will summarize what you need in order to be technically ready to use Azure Backup (via the MARS agent).


2.2.1- Azure level

As discussed earlier in this post, you need a location where you will send and store backups. This is called a Recovery Services vault (RSV). An RSV is a Microsoft Azure resource, which means you need to subscribe to Azure in order to deploy it. Subscribing to Microsoft Azure is very simple; there are many ways to achieve it, depending on your needs and the billing/relationship model you want. In order to use Azure, you need to create an Azure subscription (5). After creating it, you can directly, without any further requirement, create a Recovery Services vault ready to host your backups (within minutes).

You will then need access* to the Recovery Services vault in order to begin. You can benefit from the Azure RBAC roles (6) to have or give the required permissions.

In order to back up files and folders via the MARS agent, you will just need:

  • The MARS agent installation file: allows you to install the agent on the required servers
  • The vault credentials file: allows the MARS agent to find and authenticate to the Recovery Services vault

Both can be downloaded from the Azure portal, via the Recovery Services vault resource blades.

* Technically, you don't need access to the Recovery Services vault to enable backups. An admin can send you the required information instead.
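By the way, the registration itself can be scripted; here is a minimal sketch with the MSOnlineBackup module installed alongside the MARS agent (the credentials file path is an assumption):

Import-Module MSOnlineBackup
# Register this server to the vault using the downloaded vault credentials file
Start-OBRegistration -VaultCredentials "C:\Downloads\MyVault.VaultCredentials" -Confirm:$false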

2.2.2- Local level

By local level, I mean what you need at the server level (the server holding the folders and files to be backed up) in order to start backing up:

  • A supported operating system: only Windows is supported; Linux is not yet supported.
  • Internet connectivity: the agent needs an outbound internet connection to the Azure services in order to send data. Using a proxy is supported (see the sketch below). You can additionally limit the outbound flows to only the Azure public IPs (7) (and even further, to only the IPs belonging to the RSV region)
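For the proxy case, a minimal sketch (the proxy address and port are assumptions):

# Send the MARS agent traffic through an explicit proxy
Set-OBMachineSetting -ProxyServer "http://proxy.contoso.com" -ProxyPort 8080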


There are limitations regarding the supported operating systems, what you can back up, how often you can back up, and more. Please refer to the Azure Backup FAQ for complete information.


2.3- Security and data confidentiality

Azure Backup via the MARS agent provides many valuable security features; let me enumerate some of them:

  • You will need a vault credentials file in order to register an agent to a vault. Only backup admins can download such a file from the Azure portal
  • Before enabling the backup, you will be prompted to provide a 'passphrase'. A passphrase is a 'complex password' used to encrypt data before sending it to the RSV. Data is encrypted and sent via HTTPS to the RSV, where it remains encrypted. Note that without this passphrase you will not be able to restore data if you lose the original server (or its configuration), so the passphrase must be kept somewhere secure (you can use Azure Key Vault to store your secrets). A sketch for setting it follows this list.
  • If your server is compromised, the attacker (hacker, malicious admin) cannot delete your recovery points. Azure Backup provides a security setting (enabled by default) that requires the 'remover' to log in to the Azure portal and generate a PIN code. The probability that the attacker also owns credentials for the Azure portal is small. In addition, you can benefit from the MFA feature of the Azure portal to further secure portal access.
  • In case of a ransomware/crypto-locker attack or infection, your backup data is protected, since the backup media is totally independent of the server.
  • Other preventive security features are also available (8):
    • Retention of deleted backup data: backup data is retained for 14 days after a delete operation
    • Minimum retention range checks: ensure more than one recovery point exists in case of attacks
    • Alerts and notifications: for critical operations like 'Stop backup with delete data'
    • Multiple layers of security: a security PIN is required for critical operations (already mentioned)
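As promised in the passphrase item above, here is a minimal sketch for setting it from PowerShell; the passphrase value is obviously a placeholder:

# Set the passphrase used to encrypt data before it leaves the server
$passphrase = ConvertTo-SecureString -String "Compl3x-Passphrase-Placeholder!" -AsPlainText -Force
Set-OBMachineSetting -EncryptionPassphrase $passphrase
# Keep a copy somewhere safe (Azure Key Vault, for example)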

2.4- Monitoring and reporting

As you noticed, there is no server and no console to install in order to monitor or see what is happening. Everything is done via the Azure portal. You can use the portal to see:

  • Backup items: view the backed-up items (server name, volume…)
  • Backup status: view the status of the backups, with filtering options
  • Backup jobs: see the backup jobs and their status, including the duration and the size of backup and restore operations
  • Notifications: configure and view notifications related to the jobs. Currently, you can only configure notifications based on job status (Critical, Warning, Information)

Currently, there is no reporting feature for Azure Backup in the portal, but this feature is coming very soon.

3- How to start: the plan

In this third and final section, I will present the steps needed to successfully plan and implement your 'Files and Folders' backup. The main steps are:

  1. Create a Recovery Services Vault
  2. Configure the vault
  3. Download the Recovery Vault credentials
  4. Install the MARS Agent on the server
  5. Create a backup policy and a schedule

The detailed steps to achieve the above are described here: https://docs.microsoft.com/en-us/azure/backup/backup-configure-vault
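If you prefer scripting steps 4 and 5, here is a minimal sketch using the MSOnlineBackup module; the schedule, retention and file paths are assumptions to adapt to your needs:

Import-Module MSOnlineBackup
$policy = New-OBPolicy
# When: Monday/Wednesday/Friday at 21:00 (assumed schedule)
$sched = New-OBSchedule -DaysOfWeek Monday, Wednesday, Friday -TimesOfDay "21:00"
Set-OBSchedule -Policy $policy -Schedule $sched
# How long: 30 days of retention (assumed)
$retention = New-OBRetentionPolicy -RetentionDays 30
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
# What: the folders to protect (assumed path)
$files = New-OBFileSpec -FileSpec @("D:\Data")
Add-OBFileSpec -Policy $policy -FileSpec $files
# Commit the policy
Set-OBPolicy -Policy $policy -Confirm:$false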

The Azure Backup FAQ answers most of your questions:

https://docs.microsoft.com/en-us/azure/backup/backup-azure-backup-faq

Finally, here are my recommendations when planning to implement Azure Backup via the MARS agent:

Question: Are my source servers located in the same region?
Answer: It's recommended to back up data to the nearest location, in order to benefit from better performance/latency during backup and restore operations.

Question: Do I need to back up to the same RSV?
Answer: No, but to keep the design simple, it's better to minimize the number of RSVs for a group of similar servers.

Question: When do I need to back up to different RSVs?
Answer: Consider what differentiates two Recovery Services vaults:

–         The redundancy of the storage (LRS or GRS)

–         The user rights on the RSV

–         The vault credentials

So:

–               If your data has different levels of importance and you want to optimize costs, you can create LRS vaults for less important data and GRS vaults for more important and critical data

–               You can give permissions to access or manage each Recovery Services vault. If you want different security levels for your vaults, you can create multiple RSVs

–               The vault credentials are unique per RSV. A user with a valid vault credentials file (it expires after 2 days) can back up data to the vault

Question: Should I use the same passphrase for each server?
Answer: No. This is absolutely not recommended, for the simple reason that if someone compromises the passphrase, they can access all your servers' restore points (they would still need a valid vault credentials file).


Useful Links:


(1) Azure Backup pricing: https://azure.microsoft.com/en-us/pricing/details/backup/

(2) Azure Backup agent network throttling: https://docs.microsoft.com/en-us/azure/backup/backup-configure-vault

(3) Azure Storage redundancy: https://docs.microsoft.com/en-us/azure/storage/storage-redundancy

(4) Azure Storage pricing: https://azure.microsoft.com/en-us/pricing/details/storage/blobs-general/

(5) Designing Azure subscriptions: https://buildwindows.wordpress.com/2016/03/30/azure-iaas-arm-architecting-and-design-series-azure-subscriptions/

(6) Azure Backup roles: Backup Contributor, Backup Operator, Backup Reader

(7) Azure public IP ranges: https://www.microsoft.com/en-us/download/details.aspx?id=41653

(8) Azure Backup security features: https://azure.microsoft.com/en-us/blog/azure-backup-security-feature/

(9) Azure subscription and service limits, quotas, and constraints: https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits

New Azure Portal feature: find your quotas and limits values!

Hello All,

This is a quick post about a fresh new Azure portal feature that will help a lot of admins in some cases.

You all know that you cannot create as many Azure resources as you want: there are limits and quotas on the number of deployed resources. Such information is very important, I would say crucial, when designing your Azure infrastructure.

Let me give some examples:

  • Network Security Groups: by default, you cannot create more than 100 NSG objects within an Azure region (in the Azure Resource Manager limit model; the ASM limit model is per subscription, not per region). So if you are using NSGs to secure your environment, you will need to track the object count usage –> this is the subject of this post
  • Static public IP addresses: by default, you cannot create more than 20 static public IP addresses within an Azure region. So monitoring and tracking this resource usage is important

You can always visit the official link (9) for the latest information about service limits, quotas and constraints. Keep in mind that for several resources, you can ask Microsoft Support to increase a limit value.

Back to the main goal of this post: you can now consult the usage of your resources and their status against the quota values.

Go to the Azure portal (portal.azure.com) –> Subscriptions –> select your subscription –> Usage + quotas


You can filter the items to get a more customized view, and you can use the provided link to directly open a support case to increase the limits.
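You can also query some of these values from PowerShell; here is a minimal sketch with the AzureRM module (the region is an assumption, and network and storage have similar *Usage cmdlets):

Login-AzureRmAccount
# Compute quotas and current usage for one region; show only what is in use
Get-AzureRmVMUsage -Location "North Europe" | Where-Object { $_.CurrentValue -gt 0 }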

How to edit an existing Azure custom RBAC role?

Hello all,

Azure provides the ability to create custom roles in order to better fit your needs and to give admins more flexibility in choosing the permissions they grant to users.

Many posts already discuss Azure RBAC and custom roles.

In this post, I will clarify the right method to modify an existing custom role.

When you create a custom role, you configure many parameters:

  • Custom Role Name
  • Custom Role description
  • Custom Role Actions
  • Custom Role No-Actions
  • Custom Role assignable scopes

There are some scenarios where you would like to change one or more of these definitions, for several reasons:

– You already created a custom role assigned to only some scopes, and you want to extend or reduce the scopes

– You decided to add or remove an Action or a No-Action on an existing custom role

– You noticed a typo in the description and decided to change it

– And more reasons can come…

How to proceed?

This step-by-step guide uses Azure PowerShell, so download and install Azure PowerShell before proceeding (Download and Install Azure PowerShell).

As an example, I will make several changes to the custom role "Azure DNS Reader", which initially has a scope at the subscription level "/subscriptions/1111111-1111-1111-11111-11111111111". The changes are:

  • New name –> "Azure DNS Zone RW"
  • New description –> "Lets you view and modify Azure DNS zone objects"
  • Add or remove an Action –> "Microsoft.Network/dnsZones/write"
  • Add or remove a No-Action –> "Microsoft.Network/dnsZones/write"
  • Add or remove a scope –> "/subscriptions/222222-2222-2222-2222-2222222222222"

1- Login to Azure

Login to Azure using the following command:

Login-AzureRmAccount

2- Get the custom role definition:

  • If your custom role is assigned to the default subscription: $CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader"
  • If your custom role is assigned to a scope: $CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader" -Scope "/subscriptions/1111111-1111-1111-11111-11111111111"


3- Make changes* and commit

*Note that you can make all the changes first and commit them in the last step (see the sketch at the end of this post)

A- Change the role Name
$CustomRole.Name = "Azure DNS Zone RW"
$CustomRole | Set-AzureRmRoleDefinition


B- Change the role description
$CustomRole.Description = "Lets you view and modify Azure DNS zone objects"
$CustomRole | Set-AzureRmRoleDefinition


C- Add or Remove an Action
$Action = "Microsoft.Network/dnsZones/write"

$CustomRole.Actions.Add($Action)
#or to remove
$CustomRole.Actions.Remove($Action)

$CustomRole | Set-AzureRmRoleDefinition

D- Add or Remove a No-Action
$NoAction = "Microsoft.Network/dnsZones/write"

$CustomRole.NotActions.Add($NoAction)
#or
$CustomRole.NotActions.Remove($NoAction)

$CustomRole | Set-AzureRmRoleDefinition

E- Add or Remove a Scope
$Scope = "/subscriptions/222222-2222-2222-2222-2222222222222"

$CustomRole.AssignableScopes.Add($Scope)
#or
$CustomRole.AssignableScopes.Remove($Scope)

$CustomRole | Set-AzureRmRoleDefinition

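As noted in the remark above, you can also batch all the changes and commit only once; here is a minimal sketch combining the previous steps (it reuses the same example names, action and scope):

$CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader"
$CustomRole.Name = "Azure DNS Zone RW"
$CustomRole.Description = "Lets you view and modify Azure DNS zone objects"
$CustomRole.Actions.Add("Microsoft.Network/dnsZones/write")
$CustomRole.AssignableScopes.Add("/subscriptions/222222-2222-2222-2222-2222222222222")
# One single commit applies all the changes
$CustomRole | Set-AzureRmRoleDefinition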

How to protect and back up your branch offices' data and workloads?

Hi all,

This is a quick post where I will share a recent experience from a customer call for advice.

The customer has several branch offices (tens of them). In each site, a 'big' server is deployed, running several virtual machines that provide 'vital' infrastructure like:

  • Active Directory Domain Controller (RODC) + DHCP + DNS + Printer services
  • File Server
  • SCCM Distribution point

The question arose when we studied some DR and service continuity scenarios: the branch office workloads were in scope, but with a very low priority, and the question was: how can I minimally protect the branch office data with zero investment?

This is wasn’t a very difficult question, and the answers were like the following :

  • AD + DNS + DHCP + Printer Services:
    • AD services: when the RODC is not reachable, clients automatically contact the primary domain controllers on the main site (through S2S VPN or MPLS). This is a built-in AD behavior –> Solved
    • DNS: the secondary DNS servers configured via DHCP are the main site DNS servers –> Solved
    • DHCP: this is a vital service; without DHCP, clients will not obtain IP addresses and will not be able to work. The solution was to configure (possible since Windows Server 2012) a hot-standby failover relationship with the main site. The branch office network devices must only support IP helpers –> Solved
  • SCCM DP: the SCCM distribution point serves deployed packages from a nearby location (50 clients downloading an Office 2016 package (1 GB) or Windows updates from a local server is better than over a VPN connection). Just like with the domain controller, if a client cannot reach the nearest DP server, it will contact the next one, which can be the main site DP –> Solved
  • File server: this was the hardest question. How can we protect the file servers' data and rebuild them in case of disaster, data loss or anything similar? Let's discuss this case more deeply

The file server story

The file server is not stateless

What differentiates the file server from the other servers is that it contains changing data. If we lose this data (data loss, ransomware, accidental deletion…), there is no built-in way to recover it.

Availability or Recovery?

There are two wishes regarding file server data:

Availability: the need to access the data even if the file server goes down

Recovery: the need to recover the data when needed. Recovery can mean rebuilding the server (in case of server loss) or recovering a set of files/folders as part of an item-level recovery (deleted files, old versions, ransomware…)

The file server solution

Faced with both needs, I proposed the easiest way to achieve each one:

Availability: the easiest way to achieve availability for file servers (in the branch office case, with minimal infrastructure) is to enable DFS-R and DFS-N. DFS-R will replicate your files to another server on the main site. DFS-N will be used to create a virtual view of the shared folders, so that the same UNC path lands on the office's file server and, in case of failover, on the main site file server (where the replicated files reside). This solution is very simple to implement, and the main site server can be a target for multiple offices. The requirements are office-to-main-site bandwidth and main site storage. A configuration sketch follows.
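Here is a minimal sketch with the Windows Server DFSR and DFSN PowerShell modules; every server, group, namespace and path name below is an assumption, and the namespace root is assumed to already exist:

# Replicate the branch share to the main site (DFS-R)
New-DfsReplicationGroup -GroupName "Branch01-Files"
Add-DfsrMember -GroupName "Branch01-Files" -ComputerName "BRANCH01-FS", "MAINSITE-FS"
New-DfsReplicatedFolder -GroupName "Branch01-Files" -FolderName "Shares"
Add-DfsrConnection -GroupName "Branch01-Files" -SourceComputerName "BRANCH01-FS" -DestinationComputerName "MAINSITE-FS"
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "Shares" -ComputerName "BRANCH01-FS" -ContentPath "D:\Shares" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "Shares" -ComputerName "MAINSITE-FS" -ContentPath "E:\Branch01\Shares" -Force
# One UNC path with both servers as folder targets (DFS-N)
New-DfsnFolder -Path "\\contoso.com\Files\Branch01" -TargetPath "\\BRANCH01-FS\Shares"
New-DfsnFolderTarget -Path "\\contoso.com\Files\Branch01" -TargetPath "\\MAINSITE-FS\Branch01Shares"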

Recovery: when we say recovery, we say backup. The challenge was to find a 'simple' backup solution that:

  • Backs up the shares
  • Restores files using an item-level restore mechanism (for both the user and the admin experience)
  • Does not use local storage, as the office's infrastructure is limited (besides, local storage does not protect against a site disaster)

I was very lucky when this 'small' challenge was raised, since I was already aware of the Azure Backup MARS agent experience.

Why was I lucky?

Backing up (and restoring) data via the Azure Backup MARS (Microsoft Azure Recovery Services) agent is very interesting in this case, for several reasons:

  • Deployment simplicity: in order to back up data, you just need to download the MARS agent, install it, and choose what and when to back up, and where the data should go
  • No infrastructure: you don't need to deploy a backup infrastructure or provide local storage. The MARS agent backs up to Azure cloud storage via Recovery Services vaults. A Recovery Services vault is a cloud backup space that you need to create first (one per file server, one per region, or one for all) and then provide during the backup configuration wizard.
  • Item-level restore: the admin can easily perform an item-level restore of backed-up items
  • Limitless capacity and retention: Azure Recovery Services provides limitless storage and retention periods (up to 99 years)
  • Encrypted backup data: the data backed up to the cloud is encrypted using a key that only you know
  • Management from the cloud: all operations (backup operations, jobs, consumed storage, registered servers, notifications, alerts…) are easily managed from a single portal, the Azure portal

Backup process using the MARS agent (image credit: Microsoft)

What else?

All the requirements were met. The backup solution fits the needs and has a very short TTM (Time To Market).

Conclusion

If you are facing the challenge of protecting branch office data (connected to a main site), do not hesitate to use 'simple' ways to achieve it, in order to simplify your architecture and optimize costs. Use Azure Backup to protect any workload (even Linux is supported) and to guarantee that your data is safe in a remote location. The following table summarizes my post:

Workload | How to ensure availability or recovery
Active Directory domain controller | Failover to another DC is built-in
DHCP | Windows Server 2012 (and later) DHCP failover
DNS | Secondary remote DNS servers
File server | Availability: DFS-R + DFS-N; Backup/restore: Azure Backup via the MARS agent

Azure Virtual Machines single instance SLA, a big step!

Hi all,

Earlier this week, Microsoft made an exciting announcement: an SLA of 99.9 % for a single virtual machine: https://azure.microsoft.com/en-us/blog/announcing-4-tb-for-sap-hana-single-instance-sla-and-hybrid-use-benefit-images/

Before this announcement, single-instance VMs (those not part of an availability set) were not covered by an SLA. This was unattractive for workloads that do not support, afford or need a multi-instance deployment, which applies to many legacy workloads and, to be honest, to the majority of non-critical workloads, as well as to SMB workloads that cannot afford investing in redundancy and HA.

Many of my customers avoided migrating workloads to Azure just because of this; AWS had been offering such an SLA for a while.

With this announcement, customers have the guarantee of a 99.9 % SLA for their VMs, which means a maximum downtime of 8.76 hours per 365 days –> around 44 minutes per month.

Do not forget that this only applies to virtual machines with all disks stored on Premium Storage.

Optimize costs of your Azure Virtual Machines

Hi,

This is a very common topic, discussed in many blog posts and forums: how to optimize the costs of Azure virtual machines?

There are many aspects to optimization. The first one, also applicable to on-premises VMs, is the size of the VM (hardware optimization). If you track your virtual machine resource usage, you can see whether your VMs are oversized, and then decide to resize them to reduce their cost. But this optimization, even if valuable, is a one-time optimization.

This is why I'm discussing here another, better angle of optimization, related to the uptime of the virtual machine.

The Stopped/Deallocated VM state

When you power off an Azure virtual machine, there are two possible states: Stopped and Stopped/Deallocated. The short story is that if you shut down your VM to the Stopped state, the VM remains billable even though it is actually stopped; but if you shut it down to the Stopped/Deallocated state, it is not charged during this downtime.

What is the difference, and why not always use the Stopped/Deallocated mode?

Many blog posts discuss the difference between the two states. Here are the key differences for ARM virtual machines:

– If you stop your virtual machine from inside the OS, or using a PowerShell/CLI/API request without the deallocate flag, the VM will be stopped but will not be de-provisioned from its Azure host (the Hyper-V host). When you start the VM, it will start rapidly and will keep all its dynamic network parameters (NIC IP addresses).

– If you stop your virtual machine from the Azure portal, or using a PowerShell/CLI/API request with the deallocate flag, the VM will be stopped and de-provisioned from its Azure host (compute resources). When you start the VM, Azure will redeploy it (the same VM) and you will notice that it takes longer to start. The VM may come back with different IP addresses if static IP addresses were not used.

What to optimize?

Based on this behavior, you can schedule the stop (deallocate) and start of the virtual machines that can tolerate a downtime window during a given period (see the sketch after the examples below).

Examples

  • Test/Dev/Int/Rec virtual machines can be stopped during week nights and weekends
  • Virtual machines used only during a defined period of time can be stopped when not in use
  • Even production virtual machines can be stopped if they are not used (a user file server can be stopped during the weekend)
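Here is the promised minimal, runbook-style sketch with the AzureRM module; the resource group and VM names are assumptions (Stop-AzureRmVM deallocates unless you pass -StayProvisioned):

# Evening schedule: stop (deallocate) the test VMs
$vms = "TEST-VM01", "TEST-VM02"
foreach ($vm in $vms) {
    Stop-AzureRmVM -ResourceGroupName "RG-Test" -Name $vm -Force
}
# Morning schedule: start them again
foreach ($vm in $vms) {
    Start-AzureRmVM -ResourceGroupName "RG-Test" -Name $vm
}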

Cost optimization gain example

This example is based on an existing SMB customer planning to move all non-production virtual machines to Azure IaaS. The first wave includes around 200 VMs, with the following size repartition:

VM count | Azure size | Cost per month, 744 hours (Windows, North Europe) | Cost per month, VMs stopped 8 PM–7 AM and all day Sunday
49 | A1 | €2,766.90 | €1,305.35
56 | D1v2 | €4,286.50 | €2,022.26
70 | A2 | €7,905.43 | €3,729.58
26 | D2v2 | €3,980.32 | €1,877.81
Total | | €18,939.16 | €8,935.00

Results:

  • A total gain of around €10K/month –> €120K/year
  • A cost reduction of around 53 %

How to implement it?

There are many ways to achieve this goal; the most suitable is an automation mechanism that stops and starts the virtual machines based on a schedule.

The most suitable automation mechanism is Azure Automation, which can be used easily and without deploying any infrastructure.

There are many community contributions achieving this goal, but I prefer the one published by Microsoft, which gives a customizable downtime window per virtual machine, using ARM tags.

Here is the link: https://docs.microsoft.com/en-us/azure/automation/automation-scenario-start-stop-vm-wjson-tags

What do we need to know about Azure Stack: the Q&A

Hi all,

It has been a long time since I last blogged. A lot has actually happened in the last few months, and in this post I will explain one of the most exciting things for me: Azure Stack.

Azure Stack was introduced earlier this year (January) with a first proof of concept named TP1 (Technical Preview 1). The Technical Preview's goal was to give customers, consultants and early adopters a view of what Microsoft baptized the future of private and hybrid cloud. But really, what is Azure Stack?

The modest definition 

Azure Stack is a platform (software) that you can deploy on-premises to get Microsoft Azure-like services, features and user experience. If you are using Microsoft Azure (the new portal, known as the Ibiza portal, portal.azure.com), then this is what you will get when you deploy Azure Stack in your datacenter. You will be able to leverage the Azure underlying technology on-premises to deploy, manage and benefit from Azure services like virtual machines, web apps and virtual networks, and the list keeps evolving. Just think that instead of typing portal.azure.com in your browser, you will type a custom URL in your domain that lands you on an Azure portal, but in your datacenter.

Is Azure Stack suitable for my company or business?

Azure Stack brings advanced cloud technologies to your datacenter, from the virtualization platform (a simple virtual machine) to Azure App Service (a PaaS model to rapidly deploy web applications). So technically, Azure Stack can be used by any company aiming at least to use virtualization, but this is not enough of a reason to adopt it. As a consultant and an Azure architect, I think Azure Stack is suitable for you if:

  • You are using, or have at least experimented with, the user experience, concepts and different services provided by Microsoft Azure. If you have validated that Azure is suitable for your company and you are looking for the same experience on-premises (for any reason), then Azure Stack may be a good choice (Azure Stack is consistent with Azure)
  • You are looking for a private cloud platform that provides the latest cloud technologies and concepts. Azure Stack is born from Azure and will continually benefit from the latest enhancements made and tested on the Azure public cloud platform
  • You are looking for a modern way to build your applications and services faster, based on PaaS and microservices. Azure Stack in its first version (mid 2017) will support Azure Web Apps, and maybe Azure Service Fabric if they decide to bring it
  • The constraints I will mention next do not bother you

How will Azure Stack be delivered to customers?

This is the current debate, but Microsoft has already elected the winner, with a sort of inflexibility. Azure Stack will only be provided as integrated systems, with the freedom to choose between 3 hardware providers: HPE, Dell EMC and Lenovo (formerly IBM x86 servers). This means that you cannot deploy an Azure Stack platform on top of your own hardware; you will need to acquire the hardware with Azure Stack pre-packaged and just plug it into your datacenter.

This last statement created a rage in the community, and we got two visions:

  • Microsoft is affirming that this model is the only possible way to achieve the desired enterprise-level private/hybrid cloud platform. Microsoft states that the integration with the hardware is a very heavy task, and it prefers validating the platform and then providing a 'ready engine' to the customer.
  • The community is surprised that Microsoft is, first, locking its solution to a set of non-affordable hardware providers, and second, not following the original virtualization and cloud ideology, which is the reuse and optimization of existing resources, and even the use of affordable commodity hardware.

Azure Stack licensing and prices

This is what I call the mystery question for the public, because I have early information but, due to NDA, I'm not allowed to publish it. What I can say is that whatever the licensing model is, I think it will be expensive, and I wonder if it will reach the SMB market. Remember, there are 3 parties involved: the hardware provider, the software provider and the integrator (which is Microsoft anyway, but should be counted as a third party IMHO).


What if I acquire Azure Stack? What about the test platform?

This is a question I have asked before, and the answer was not quite sufficient. Here is the summary:

  • Microsoft will provide the one-node PoC, which is a one-node Azure Stack platform. It's what is delivered today in TP1 and TP2. You can install Azure Stack on one node to run a PoC, discover the product and make the tests you want. Meanwhile, we are not certain (no information) how accurately the one-node PoC will track the integrated-system Azure Stack platform in terms of minor updates, bug fixes and, more importantly, features.
  • You can do what you are actually doing on Azure today: create a test subscription with limited quotas, where you can run deployment tests –> this still depends on the licensing model, as we don't want a test platform to be costly


What are the actual sizes of the Azure Stack integrated system offers?

It's too early to speak about the capacity (CPU, memory, storage) of the Azure Stack platforms provided via the integrated systems. Things can change, and I think the final sizes will be revealed at GA. Anyway, the minimum published size today is 4 hosts, each with 256 GB of RAM and dual 8-core sockets, constituting a single scale unit (Hyper-V cluster) that will contain both the management infrastructure (the VMs needed for Azure Stack to work) and your workloads (IaaS VMs, App Service plan VMs…). Do not forget that Azure consistency implies that the virtual machine sizes you will be able to deploy are the same as the Azure VM sizes. Hans Vredevoort has thorough articles about the Azure Stack architecture.

Where do system integrators fit in this whole thing?

This is one of the questions I asked myself. For standard products like System Center and Windows Azure Pack, system integrators were almost mandatory to successfully deploy the products at a customer site. But the question was raised again with the decision to only provide Azure Stack via integrated systems: no more need for system integrators to deploy the solution, a sort of plug and play.

This is true, but it isn't so bad (well, not the case for geeks, unfortunately).

In fact, what are you, integrators, doing today when your customers call you to help them design and deploy their workloads on Azure? You are certainly happy to help, and this is why we should be optimistic when speaking about Azure Stack. Because Azure Stack is Azure in your datacenter (an Azure sealed black box), customers will need you to help them first choose which Azure Stack offer (tier) to purchase, and then help them use Azure Stack, the same as what you are doing today on Azure. The consistency will make your Azure expertise very valuable in the on-premises field.

What we will miss is having our hands on a real Azure Stack platform for practice. Theoretically, this will not be the biggest problem, since we can use the one-node PoC to achieve such goals. What is causing a headache for the community is the near-impossibility of deploying the one-node PoC in our labs at home. The minimum RAM requirement for TP2 is 96 GB, and this is just the minimum; expect up to 128 GB to start enjoying the lab and deploying all the services. I don't know many people who have a real 128 GB RAM server at home.


Can CSPs benefit from Azure Stack?

Things are not yet settled, and Microsoft has not yet published a clear view of the Cloud Service Provider interaction with Azure Stack. But it's clear that, through the CSP program, CSPs will be able to use Azure Stack to deliver sophisticated Azure-like features. The biggest factors that may slow CSPs down from using Azure Stack are:

  • Locked-down hardware providers: cloud service providers are certainly partnering with hardware providers to get discounts and advantages when buying hardware. I'm very pessimistic regarding this factor; CSPs may look at other cloud platform solutions or continue building their own
  • Licensing and pricing: the introduction of a locked-hardware model may impact the margin a CSP can generate. No comment on the software licensing part.


What do I think of Azure Stack and the implementation model ?

Microsoft is a great enterprise; master minds work there, trying every day to enhance its products, creating new technologies and approaches, and pivoting and changing its business model to impact the market. But no one is bullet-proof; Microsoft can make mistakes and have failures (the case of Windows Phone, which is terribly not progressing), and it is dramatically changing its business model with Azure Stack: cloud appliances. I'm really waiting for the licensing announcement to see which customer segment it targets, but what I'm waiting for most is the customer reaction; I have no idea how they will react. My first impression is that this model is not appreciated by the community, nor by me. I'm not against integrated-system platforms, but I stand with the early virtualization and cloud goals: hardware reuse and cost reduction. Azure Stack does not fit there, in addition to restricting the hardware providers we can choose from. I have a bad feeling melted with a great excitement about this Azure Stack era, and I hope it will find its way.

In my opinion, these are the pros and cons of Azure Stack:

The GOOD

    • Azure services in your datacenter: this is the most exciting thing about Azure Stack; you can bring Azure features (IaaS, PaaS…) to your datacenter and have the latest tested* cloud technologies at hand. If you are avoiding public cloud platforms for any reason (privacy, compliance, trust, network connectivity) but at the same time wish you could use the features they provide, then Azure Stack is for you
    • Plug-and-play model: the integrated-system model will reduce the TCO by bringing a ready-to-use private cloud platform
    • A truly consistent hybrid cloud platform: Azure Stack is a real advantage for customers using or planning to use Azure services, since consistency between the platforms is guaranteed. You can use the same approaches, design decision factors, tools, scripts and features. You no longer need two different models to manage your cloud and on-premises platforms, significantly reducing the IT effort, which can instead be spent on real business concerns (deploying apps, migrating, enhancing…)

The Bad

    • The other side of the integrated-system model: hardware lock-in was never a good idea, depriving customers of freely choosing their hardware provider, and hence of better controlling costs. This model will shrink the early-adopter market and can therefore slow this beautiful product from being widely adopted.

* Azure Stack will bring features already used on Azure, so we can be sure the features were widely tested on the public cloud

If you think the integrated-system model is not suitable for you, support this idea and help influence the future of Azure Stack: Provide an installable version of Azure Stack