Optimize costs of your Azure Virtual Machines


This is a very common topic, discussed in many blog posts and forums: how do you optimize the costs of your Azure virtual machines?

There are many aspects of optimization. The first, which also applies to on-premises VMs, is the size of the VM (hardware optimization). If you track your virtual machine resource usage, you can see whether your VMs are oversized and resize them to reduce their costs. But this optimization, however valuable, is a one-time gain.

This is why I'm discussing here another, more powerful aspect of optimization: the uptime of the virtual machine.

The Stopped/Deallocated VM state

When you power off an Azure virtual machine, there are two possible states: Stopped and Stopped (Deallocated). The short story is that if you shut down your VM to the Stopped state, it remains billable even though it is not running. But if you shut it down to the Stopped (Deallocated) state, it is not charged during the downtime.

What is the difference, and why not always use the Stopped (Deallocated) mode?

Many blog posts discuss the difference between the two states. Here are the key differences for ARM virtual machines:

– If you stop your virtual machine from inside the OS, or via a PowerShell/CLI/API request without the Deallocate flag, the VM is stopped but not de-provisioned from its Azure host (Hyper-V host). When you start the VM again, it starts quickly and keeps all its dynamic network parameters (NIC IP addresses).

– If you stop your virtual machine from the Azure portal, or via a PowerShell/CLI/API request with the Deallocate flag, the VM is stopped and de-provisioned from its Azure host (its compute resources are released). When you start the VM, Azure redeploys it (the same VM), so it takes longer to start. The VM may come back with different IP addresses if static IP addresses were not used.
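The billing consequence of these two states can be summed up in a tiny sketch (the state names and hour counts are illustrative, not an Azure API):

```python
def billed_hours(hours_in_state):
    """Compute compute-billed hours given hours spent per state.

    A VM in the plain 'Stopped' state still reserves its host, so its
    compute is billed; only 'Stopped (Deallocated)' stops the meter.
    (Disk storage is billed in every state.)
    """
    return sum(h for state, h in hours_in_state.items()
               if state in ("running", "stopped"))

# A 744-hour month: 300 h running, 444 h stopped but never deallocated
print(billed_hours({"running": 300, "stopped": 444}))      # 744 billed hours
# Same month, but the idle hours are spent deallocated
print(billed_hours({"running": 300, "deallocated": 444}))  # 300 billed hours
```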

What to optimize?

Based on this behavior, you can schedule the stop (deallocate) and start of any virtual machine that can tolerate a downtime window.


  • Test/Dev/Int/Rec virtual machines can be stopped during week nights and weekends.
  • Virtual machines that are only used during a defined period can be stopped when not in use.
  • Even production virtual machines can be stopped if they are not used (a user file server, for example, can be stopped during the weekend).

Cost optimization gain example

This example is based on an existing SMB customer. This customer is planning to move all non-production virtual machines to Azure IaaS. The first wave will include around 200 VMs, with the following size distribution:

(Table: for each VM count and Azure size, the original post compared the cost per month at full uptime (744 hours, Windows-based, North Europe) with the cost per month when the VMs are stopped between 8 PM and 7 AM and all day Sunday; the per-size rows are not reproduced here.)
  • A total gain of around 10 K€/month –> 120 K€/year
  • A cost reduction of around 55 %
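A back-of-envelope check of that figure, assuming the schedule above (deallocated from 8 PM to 7 AM every night plus all of Sunday) and that compute is only billed while the VM is allocated:

```python
# Weekly uptime under the schedule: running Mon-Sat from 7 AM to 8 PM,
# deallocated the rest of the time and all day Sunday.
HOURS_PER_WEEK = 7 * 24              # 168
RUNNING_HOURS = 6 * (20 - 7)         # Mon-Sat, 7 AM -> 8 PM = 78 h

uptime_ratio = RUNNING_HOURS / HOURS_PER_WEEK
saving_pct = (1 - uptime_ratio) * 100
print(f"Uptime: {uptime_ratio:.0%}, compute saving: {saving_pct:.1f} %")
```

This lands at roughly 54 %, in line with the "around 55 %" figure above; the exact number depends on the VM mix, month length and rounding.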

How to implement it?

There are many ways to achieve this goal; the most suitable is an automation mechanism that stops and starts the virtual machines on a schedule.

A natural fit is Azure Automation, which is easy to use and requires no infrastructure deployment.

There are many community contributions for this, but I prefer the solution published by Microsoft, which allows a customizable downtime window per virtual machine using ARM tags.

Here is the link: https://docs.microsoft.com/en-us/azure/automation/automation-scenario-start-stop-vm-wjson-tags
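Microsoft's runbooks read a schedule tag on each VM; their exact tag contract is documented at the link above, but the evaluation logic can be sketched roughly like this (the `AutoShutdownSchedule` tag name and the `20:00->07:00, Sunday` format are illustrative assumptions, not the runbook's actual format):

```python
from datetime import datetime

def should_be_deallocated(tags, now):
    """Evaluate a per-VM downtime window stored in an ARM tag.

    The tag holds a comma-separated list of hour ranges and weekday
    names; the VM should be deallocated whenever 'now' matches one.
    """
    schedule = tags.get("AutoShutdownSchedule")
    if not schedule:
        return False                       # untagged VMs are left alone
    for entry in (part.strip() for part in schedule.split(",")):
        if "->" in entry:                  # hour range, e.g. 20:00->07:00
            start, end = entry.split("->")
            start_h, end_h = int(start.split(":")[0]), int(end.split(":")[0])
            if start_h <= end_h:
                if start_h <= now.hour < end_h:
                    return True
            elif now.hour >= start_h or now.hour < end_h:
                return True                # window crosses midnight
        elif entry == now.strftime("%A"):  # full-day entry, e.g. Sunday
            return True
    return False

tags = {"AutoShutdownSchedule": "20:00->07:00, Sunday"}
print(should_be_deallocated(tags, datetime(2016, 11, 7, 22, 0)))  # Monday 10 PM
print(should_be_deallocated(tags, datetime(2016, 11, 7, 12, 0)))  # Monday noon
print(should_be_deallocated(tags, datetime(2016, 11, 6, 12, 0)))  # Sunday noon
```

A runbook would evaluate this on a timer for every tagged VM and call the stop-with-deallocate or start operation accordingly.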

What do we need to know about Azure Stack: The Q&A

Hi all,

It has been a long time since I last blogged. A lot has happened in the last few months, and in this post I will cover one of the most exciting topics for me: Azure Stack.

Azure Stack was introduced earlier this year (January) with a first proof of concept named TP1 (Technical Preview 1). The goal of the Technical Preview was to give customers, consultants and early adopters a view of what Microsoft has baptized the future of private and hybrid cloud. But, really, what is Azure Stack?

The modest definition 

Azure Stack is a software platform that you can deploy on-premises to get services, features and a user experience similar to Microsoft Azure. If you are using Microsoft Azure (the new portal, known as the Ibiza portal, portal.azure.com), then this is what you will get when you deploy Azure Stack in your datacenter. You will be able to leverage the Azure underlying technology on-premises to deploy, manage and benefit from Azure services like virtual machines, web apps and virtual networks, and the list keeps growing. Just imagine that instead of typing portal.azure.com in your browser, you type a custom URL on your domain that lands you on an Azure portal, but in your datacenter.

Is Azure Stack suitable for my company or business?

Azure Stack brings advanced cloud technologies to your datacenter, from the virtualization platform (a simple virtual machine) to Azure App Service (a PaaS model for rapidly deploying web applications). So technically, Azure Stack can be used by any company that at least aims to use virtualization, but that alone is not a reason to adopt it. As a consultant and an Azure architect, I think Azure Stack is suitable for you if:

  • You have used, or at least experimented with, the user experience, concepts and services provided by Microsoft Azure. If you have validated that Azure is suitable for your company and you are looking for the same experience on-premises (for whatever reason), then Azure Stack may be a good choice (Azure Stack is consistent with Azure).
  • You are looking for a private cloud platform that provides the latest cloud technologies and concepts. Azure Stack is born from Azure and will continually benefit from the latest enhancements made and tested on the Azure public cloud platform.
  • You are looking for a modern way to build your applications and services faster, which is the model based on PaaS and microservices. Azure Stack in its first version (mid-2017) will support Azure Web Apps, and perhaps Service Fabric if Microsoft decides to bring it.
  • The constraints I will mention next do not bother you.

How will Azure Stack be delivered to customers?

This is the current debate, but Microsoft has already picked the winner, with a certain inflexibility: Azure Stack will only be provided via integrated systems, with the freedom to choose between three hardware providers: HPE, Dell EMC and Lenovo (formerly IBM's x86 servers). This means that you cannot deploy an Azure Stack platform on top of your own hardware; you will need to acquire hardware with Azure Stack pre-packaged and just plug it into your datacenter.

This decision created an outcry in the community, and there are two opposing views:

  • Microsoft affirms that this model is the only way to achieve the desired enterprise-grade private/hybrid cloud platform. It states that integration with the hardware is a very heavy task, and it prefers to validate the platform and deliver a 'ready engine' to the customer.
  • The community is surprised that Microsoft is, first, locking its solution to a set of non-affordable hardware providers and, second, abandoning the original virtualization and cloud ideology: reusing and optimizing existing resources, and even using affordable commodity hardware.

Azure Stack licensing and prices

This is what I call the mystery question, because I have early information but, due to NDA, I'm not allowed to publish it. What I can say is that whatever the licensing model, I think it will be expensive, and I wonder if it will reach the SMB market. Remember, there are three parties involved: the hardware provider, the software provider and the integrator (which is Microsoft anyway, but should be counted as a third party IMHO).


What if I acquire Azure Stack? What about the test platform?

This is a question I have asked before, and the answer was not quite sufficient. Here is the summary:

  • Microsoft will provide the one-node PoC, a single-node Azure Stack platform. It's what is delivered today in TP1 and TP2. You can install Azure Stack on one node to run a PoC, explore, and make whatever tests you want. In the meantime, however, we have no information on how faithful the one-node PoC will be to the integrated-system Azure Stack platform in terms of minor updates, bug fixes and, more importantly, features.
  • You can do what you already do on Azure: create a test subscription with limited quotas where you can make deployment tests –> This still depends on the licensing model, as we don't want a test platform to be costly.


What are the actual sizes of the Azure Stack integrated system offers?

It's too early to speak about the capacity (CPU, memory, storage) of the Azure Stack platforms provided via the integrated systems. Things can change, and I think the final sizes will be revealed at GA. Anyway, the minimum published size today is four hosts with 256 GB of RAM and dual sockets with 8 cores each, constituting a single scale unit (Hyper-V cluster) that will contain both the management infrastructure (the VMs Azure Stack needs to work) and your workloads (IaaS VMs, App Service plan VMs…). Do not forget that Azure consistency implies that the virtual machine sizes you will be able to deploy are the same as the Azure VM sizes. Hans Vredevoort has thorough articles about the Azure Stack architecture.

Where do system integrators fit in all of this?

This is one of the questions I asked myself. With standard products like System Center and Windows Azure Pack, system integrators were almost mandatory to deploy the products successfully at a customer site. But the question arises with the decision to provide Azure Stack only via integrated systems: there is no more need for system integrators to deploy the solution; it's a sort of plug and play.

This is true, but it isn't so bad (not the case for geeks, unfortunately).

In fact, what do you integrators do today when customers call you to help them design and deploy their workloads on Azure? You are certainly happy to help, and this is why we should be optimistic about Azure Stack. Because Azure Stack is Azure in your datacenter (a sealed Azure black box), customers will need you first to help them choose which Azure Stack offer (tier) to purchase, and then to help them use Azure Stack, just as you do today with Azure. This consistency will make your Azure expertise very valuable in the on-premises field.

What we will miss is getting our hands on a real Azure Stack platform for practice. Theoretically, this will not be the biggest problem, since we can use the one-node PoC for that. What is causing headaches for the community is the near-impossibility of deploying the one-node PoC in a home lab: the minimum RAM requirement for TP2 is 96 GB, and that is just the minimum; expect up to 128 GB to start enjoying the lab and deploying all the services. I don't know many people with a real 128 GB RAM server at home.


Can CSPs benefit from Azure Stack?

Things are not yet settled, and Microsoft has not yet published a clear view of the Cloud Service Provider interaction with Azure Stack. But it's clear that through the CSP program, CSPs will be able to use Azure Stack to deliver sophisticated Azure-like features. The biggest factors that may keep CSPs from using Azure Stack are:

  • Locked-down hardware providers: Cloud Service Providers usually partner with hardware providers to get discounts and advantages when buying hardware. I'm very pessimistic about this factor; CSPs may look at other cloud platform solutions or continue building their own.
  • Licensing and pricing: the locked-hardware model may impact the margin a CSP can generate. No comment on the software licensing part.


What do I think of Azure Stack and the implementation model?

Microsoft is a great enterprise; master minds work there, trying every day to enhance its products, creating new technologies and approaches, pivoting and changing its business model to impact the market. But no one is bullet-proof; Microsoft can make mistakes and have failures (the case of Windows Phone, which is terribly stalled), and it is dramatically changing its business model with Azure Stack: cloud appliances. I'm really waiting for the licensing announcement to see which customer segment it targets, and above all for the customers' reaction; I have no idea how they will react. My first impression is that this model is not appreciated by the community, nor by me. I'm not against integrated-system platforms, but I stand with the early goals of virtualization and cloud: hardware reuse and cost reduction. Azure Stack does not fit there, in addition to restricting the hardware providers we can choose from. I have a bad feeling mixed with great excitement about this Azure Stack era, and I hope it will find its way.

In my opinion, here are the pros and cons of Azure Stack:

The Good
    • Azure services in your datacenter: this is the most exciting thing about Azure Stack; you can bring Azure features (IaaS, PaaS…) to your datacenter and have the latest tested* cloud technologies at hand. If you are avoiding the public cloud platforms for any reason (privacy, compliance, trust, network connectivity) but at the same time wish to use the features they provide, then Azure Stack is for you.
    • Plug-and-play model: the integrated-system model will reduce the TCO by delivering a ready-to-use private cloud platform.
    • A truly consistent hybrid cloud platform: Azure Stack is a real advantage for customers using or planning to use Azure services, since consistency between the platforms is guaranteed. You can use the same approaches, design decision factors, tools, scripts and features. You no longer need two different models to manage your cloud and on-premises platforms, significantly reducing IT effort, which can then be spent on real business concerns (deploying apps, migrating, enhancing…).

The Bad

    • The other side of the integrated-system model: hardware lock-in was never a good idea, depriving customers of the freedom to choose their hardware provider and hence better control costs. This model will shrink the early-adopter market and may slow this beautiful product from being widely used.

* Azure Stack will bring features already used on Azure, so we can be sure the features were widely tested on the public cloud.

If you think the integrated-system model is not suitable for you, you can support this idea and help influence the future of Azure Stack: Provide an installable version of Azure Stack.

How can we use Azure Cool Storage for archiving data?

Azure Cool Storage was introduced early this summer as a new Azure storage account tier. Before then, Azure Storage provided only one kind of storage account, general purpose, with two performance tiers; storing daily data (virtual machine disks, file shares…) therefore cost the same as storing archival and backup data.

A new type of storage account was then introduced, bringing two different access tiers: Hot and Cool. The following picture shows the Azure storage account types:


This article explains how you can use the Azure Cool Storage tier for archival purposes.

1- Scenarios proposal

1.1- Use StorSimple physical Array

Azure StorSimple is a hybrid storage array that stores data both locally and in the cloud. StorSimple has many advantages and offers a lot of interesting features, like:

  • Tiering: As your data ages, automatically tier it and move it from solid-state disk to spinning hard drive to the cloud
  • Local volume: Ensure any primary data that you specify as local will stay on the physical StorSimple device
  • Cloud snapshot: Enable rapid movement of volume snapshots to the cloud to support integrated disaster recovery, as well as dev/test and business analytics environments
  • Data deduplication: Deduplicate data inline to reduce your storage growth rates

The following capabilities will make the proposed design successful:

  • Azure StorSimple supports tiering to the cloud –> cold data will be moved from local storage to cloud storage
  • Azure StorSimple supports Azure Cool storage accounts as cloud storage
  • Azure StorSimple volumes can be exposed via iSCSI

The scenario is as follows:

  • A Cool Azure storage account (or more) will be created and linked to the StorSimple
  • A volume (or more) will be created on the StorSimple
  • The volume (or volumes) will be presented to a Windows file server
  • File shares will be made accessible to the admins or the tools that move the data to be archived to these shares

–> Thanks to the tiering feature, cold data will be automatically tiered to the Azure storage account. Because archived data is not expected to change, we expect 100 % cold data and hence 100 % tiered data.

NB: Keep in mind the various StorSimple limits: https://azure.microsoft.com/en-us/documentation/articles/storsimple-limits/

1.2- Use StorSimple virtual array

If you don't use, or are not planning to use, the StorSimple physical array, you can use the StorSimple Virtual Array, introduced early this year. The Virtual Array can be hosted on a Hyper-V or VMware virtualization platform. It provides features similar to the physical array, except for local storage, which must be provisioned from the virtualization platform's storage. Keep in mind that the Virtual Array provides less capacity than the physical one (see the first table at https://azure.microsoft.com/en-us/documentation/articles/storsimple-ova-overview/).

The Virtual Array has an additional feature compared to the physical one: file shares. It can directly expose SMB shares to users.

The design is as follows:

Variant 1

  • A Cool Azure storage account (or more) will be created and linked to the StorSimple
  • A volume (or more) will be created on the StorSimple
  • The volume (or volumes) will be presented to a Windows file server
  • File shares will be made accessible to the admins or the tools that move the data to be archived to these shares

Variant 2

  • A Cool Azure storage account (or more) will be created and linked to the StorSimple
  • A file share (or more) will be created
  • File shares will be made accessible to the admins or the tools that move the data to be archived to these shares

–> Thanks to the tiering feature, cold data will be automatically tiered to the Azure storage account. Because archived data is not expected to change, we expect 100 % cold data and hence 100 % tiered data.

1.3- Use Microsoft Azure Storage Explorer

Microsoft Azure Storage Explorer is a tool developed and maintained by Microsoft that allows managing and performing various operations on Azure storage accounts.

The tool can be downloaded here : http://storageexplorer.com/

NB: Please do not confuse Microsoft Azure Storage Explorer with Azure Storage Explorer. The latter is an open-source project which is not very well maintained (but was here first).

The scenario is as follows:

  • Install MASE on a server (we recommend a dedicated server)
  • Use AzCopy (the command-line tool) to copy the data to be archived to the Azure Cool storage account. You can also use the Storage Explorer UI. AzCopy is handy if you want to schedule data movement or create scripts.
  • Use MASE to explore and download the Azure storage account data

NB: MASE does not support mounting a storage account container and managing its content (browse, copy, delete…) via a drive letter.

1.4- Use a third party tool

There are various third-party tools for managing and accessing Azure storage accounts. The difference from MASE is that these tools allow mounting storage account containers as virtual drives, so they can be accessed directly via Windows Explorer.

The scenario is as follows:

  • Install the tool on a server (we recommend a dedicated server)
  • Attach a Cool Azure storage account using the tool and create a mount point (or a virtual drive)
  • Admins can move the data to be archived directly to the mount points

The following are two tools which can be used for this purpose:

2- Scenarios Comparison



StorSimple Physical Array
  • Pros: best capacity and performance
  • Cons: expensive, or needs a minimum credit if purchased under an EA; no SMB file share support, unlike the SVA
  • My rating: use it if you already have a physical StorSimple. Enterprise solution.

StorSimple Virtual Array
  • Pros: supports SMB file shares (Windows file share + share permissions)
  • Cons: needs a virtualization platform and local storage capacity (10 % of the total storage); limited capacity if exposed via iSCSI (5 TB per volume)
  • My rating: use it if you don't have a physical StorSimple or you don't want to deploy a file server. Enterprise solution.

Microsoft Azure Storage Explorer
  • Pros: free; simple to use
  • Cons: storage accounts are only exposed via the tool UI and command line
  • My rating: personal or occasional use

Third-party tools
  • Pros: simple to use
  • Cons: may not be scalable for free editions
  • My rating: personal or occasional use

3- Azure cool storage costs

Assumptions:

  • I do not count blob operations, which are billed too. We estimate that the cost of these operations will not have a significant impact on the total cost; in addition, it's not predictable.
  • I assume there is no change to the already written data (archived data does not change over time). Changes would incur write costs.
  • I'm using public prices for the North Europe region (euros).
  • Do not forget that the pricing is per month for storage and per GB for write, change and retrieval operations.

Data size | Storage per month (LRS / GRS) | Data write per operation (LRS / GRS) | Data retrieval per operation (LRS/GRS + outbound data transfers)
1 GB | 0.0084 € / 0.0169 € | 0.0021 € / 0.0042 € | 0.0818 € = 0.0084 + 0.0734
10 GB | 0.084 € / 0.169 € | 0.021 € / 0.042 € | 0.818 € = 0.084 + 0.734
100 GB | 0.84 € / 1.69 € | 0.21 € / 0.42 € | 8.18 € = 0.84 + 7.34
1 TB | 8.4 € / 16.9 € | 2.1 € / 4.2 € | 81.8 € = 8.4 + 73.4

Cost example with a Cool storage account in GRS redundancy:

  • If we store 1 TB of data, we'll pay 16.9 €/month, plus 4.2 € once for the write
  • If we download 1 GB, we'll pay 0.0818 € once
  • If we download 100 GB, we'll pay 8.18 € once
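The examples above can be reproduced from the unit prices in the table (a minimal sketch; 1 TB is counted as 1000 GB, matching the table):

```python
# Cool/GRS unit prices from the table above (North Europe, euros).
STORAGE_GRS_PER_GB = 0.0169    # € per GB per month
WRITE_GRS_PER_GB   = 0.0042    # € per GB, one-time, on ingest
RETRIEVAL_PER_GB   = 0.0084    # € per GB read back
EGRESS_PER_GB      = 0.0734    # € per GB of outbound data transfer

def monthly_storage(gb):
    return gb * STORAGE_GRS_PER_GB

def one_time_write(gb):
    return gb * WRITE_GRS_PER_GB

def one_time_download(gb):
    return gb * (RETRIEVAL_PER_GB + EGRESS_PER_GB)

print(monthly_storage(1000))     # ≈ 16.9 €/month for 1 TB
print(one_time_write(1000))      # ≈ 4.2 € once
print(one_time_download(100))    # ≈ 8.18 € once
```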

Change Azure ARM VM size

Hi all,

Microsoft has published an article describing how to resize an Azure virtual machine, showing that in most cases using the portal or PowerShell is enough to resize the VM within a few minutes, with a short downtime (reboot). I discussed that in a previous post. In some cases, however, the host where the VM is deployed does not support the target size, so you will not be able to resize the VM without redeploying it.

I thought the 'Redeploy' feature would redeploy the VM on another host supporting more VM sizes, but it seems that it redeploys it in the same 'pool', with the same characteristics.

So I decided to write a quick script (you can use the functions as-is or adapt them to your context) to easily resize a VM in a few minutes.

The script will do the following:

1- Remove the VM

2- Construct the new VM object with the new Size

3- Create the new VM

You can download the script from the following link: https://gallery.technet.microsoft.com/Change-Azure-ARM-VM-size-039cda22

Is Microsoft Azure ready for IaaS?


It's been almost one year since Azure Resource Manager was announced as generally available. That was a great day for me since, from my point of view, it marked the beginning of Azure IaaS as an enterprise public cloud platform.

Since that day, a great customer I work for decided to accept the challenge and implement Azure IaaS on top of Azure Resource Manager, a decision mixed with some fear and perplexity. And today, here we are: what happened after one year? Is Azure IaaS on Resource Manager ready for the enterprise?

I would need tens of pages to tell the whole story: decisions, challenges, missing features, good and bad designs, workarounds, patience. But to summarize, these are the main conclusions I can state:

1- Building blocks of IaaS

IaaS (Infrastructure as a Service) can, jargon aside, be summed up as virtual machines in the cloud (private or public). To build a complete platform using IaaS, the following building blocks must be present:

  • Compute: compute is the heart of IaaS, since it's the heart of a virtual machine. It's basically defined by CPU and memory (RAM) capabilities, i.e. the principal hardware configuration. Compute determines the performance of your workloads.
  • Storage: storage is where all data resides and where data is written and read. The quality of the storage determines the quality of your workloads.
  • Network: the network is the veins of the infrastructure; all information and data transit through it. It directly impacts the performance and quality of your workloads.

In addition to the above key building blocks, there are other factors we judge 'mandatory' for an Infrastructure as a Service platform:

  • Security: the tools, features and functionality controlling the security of the platform
  • Operations and management: the tools and functionality provided to manage and operate your platform
  • Backup: mandatory and not optional; backup features are a key criterion when choosing a platform and solution


2- Where is Microsoft Azure in all of this?

The following are my conclusions and my thoughts about how Azure (my focus is on Resource Manager) performs regarding IaaS building blocks:

2.1- Compute

Microsoft Azure provides a rich offer on the compute side. It's continually evolving, with new configurations appearing regularly. The offers differ mainly in compute hardware (CPU type, memory type) or in the configuration itself (core/memory ratio, supported disk count). With public cloud platforms, we may at first be uncomfortable with hardware configurations that seem non-standard (3.5 GB of RAM instead of 4 GB!), but that's how it is, and our 'on-prem' habits should change. Azure today provides different levels of compute configurations with different pricing, and I consider the offer rich so far. Microsoft is progressively adding new configurations and compute offers.

Azure Virtual Machine series and sizes: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-sizes/

Azure Virtual Machine Pricing : https://azure.microsoft.com/en-us/pricing/details/virtual-machines/

Positive points:
  • Rich configuration offer
  • Continuous evolution

Negative points:
  • No SLA for a single virtual machine (you need to split your service across at least two VMs to get an SLA)
  • No intermediate hardware configurations (jumping from 7 to 14 GB of RAM, for example)

2.2- Storage

Microsoft Azure currently provides two kinds of storage: Standard and Premium. The difference is mainly about performance and, consequently, pricing. While Premium storage provides interesting, high performance, it's relatively expensive and does not follow the 'pay as you consume' model, an extremely negative point. Standard storage is not expensive, but we can't say the same about its performance: capped at 500 IOPS per disk, you cannot expect great results for workloads that depend on storage performance.
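To illustrate the 500 IOPS cap: a common workaround is to stripe several Standard data disks into one volume (e.g. with Storage Spaces). The helper below is a simplified sizing sketch that ignores VM-level disk-count and throughput caps:

```python
import math

STANDARD_DISK_IOPS = 500   # cap per Standard storage data disk

def standard_disks_needed(target_iops):
    """How many Standard disks to stripe together to reach a target IOPS."""
    return math.ceil(target_iops / STANDARD_DISK_IOPS)

print(standard_disks_needed(2000))   # a 2000 IOPS workload needs 4 disks
print(standard_disks_needed(3100))   # 3100 IOPS needs 7 disks
```

Remember that each VM size also limits how many data disks you can attach, so past a certain point the only real answer is Premium storage.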

Positive points:
  • Two storage classes
  • Pricing of Standard storage

Negative points:
  • Standard storage performance is poor
  • Premium storage pricing model: expensive, and no 'pay as you go'
  • We should see more storage offers (cold storage, archive storage…)

2.3- Network

Microsoft Azure's networking concept is very interesting. You can 'almost' translate your on-prem networking configuration to Azure. The concept of virtual networks and subnets is easy to adopt. Moving from VLAN-based networks to this type of flat network may be cumbersome for some people but, to be fair, it's the same concept. Interconnection with on-premises networks, or with virtual networks in other Azure regions, is also provided via two models: VPN and ExpressRoute. VPN is the standard way to connect two sites using commodity hardware and the internet. ExpressRoute is a Microsoft offer that uses a private circuit (provided, of course, by a network provider) to interconnect your on-premises sites with Azure datacenters, with high bandwidth and low latency. In the meantime, there are still some missing features that may stop you, or at least slow you down. Many customers want to use their standard products and tools for network routing/firewalling (VPN, firewall, IDS, IPS…) and to avoid the Azure Network Security Groups feature (NSGs have many drawbacks); for that, many Microsoft partners provide products adapted for Azure as virtual appliances (Fortinet, Barracuda…). There are two big limitations blocking customers from using virtual appliances:

  • VA high availability: since a VA is an Azure VM, and since there is no SLA for a single VM, you need to deploy at least two VAs and load-balance traffic between them. But with two VAs we hit a load-balancing limit: the Azure Load Balancer (the only way to achieve load balancing on Azure, since VIPs are not supported) does not support balancing traffic at the IP level; a protocol and a port are mandatory. Hence, no HA for virtual appliances.
  • UDR on the Gateway subnet: to use the native VPN and ExpressRoute offers from Microsoft, you need to deploy a gateway at the virtual network level. UDR (User Defined Routing) is a feature that lets you change the routing of your Azure subnets, i.e. you control where the next hop of your packets lands. This is very interesting, since you can send all your Azure traffic to the virtual appliance for filtering. Unfortunately, a UDR cannot be applied to the Gateway subnet, so if you use a VA together with the Azure gateway to connect Azure to other sites, all the inbound traffic entering Azure via the gateway lands directly on the subnets and not on the VA: the combination of a VA + gateway is impossible. On the other hand, you can use the VA itself to create a VPN with your sites but, as stated in the first limitation, there is no SLA for the VA (the Azure gateway has a 99.9 % SLA).
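For illustration, a user-defined route that forces subnet traffic through a virtual appliance looks roughly like this ARM resource (the resource name and the 10.0.1.4 appliance address are placeholders); the point of the limitation above is that such a route table cannot be associated with the GatewaySubnet:

```json
{
  "type": "Microsoft.Network/routeTables",
  "name": "udr-via-appliance",
  "apiVersion": "2016-03-30",
  "location": "northeurope",
  "properties": {
    "routes": [
      {
        "name": "default-via-va",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.0.1.4"
        }
      }
    ]
  }
}
```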


Positive points:
  • Simple to understand
  • Simple to design and deploy

Negative points:
  • Network Security Groups are not a scalable security solution
  • No HA for virtual appliances (no load balancing at the IP level)
  • No UDR on the Gateway subnet

2.4- Security

Security is a huge topic, since it touches every level of the fabric: access security, network security, confidentiality, integrity… and so on. But this post only aims to see whether, globally, Azure provides the features required for security. Let's look at some security aspects at different levels:

  • RBAC : Role Based Access Control was introduced last year with the new Portal to add the capacity to use pre-defined roles to limit the Access to the Azure resources. This was an achievement and gave customers more control about how their users and admin can manage the resources. The introduction of ‘Custom roles’ at the end of 2015 was just amazing.
  • Storage : Usually when we say Storage and Security, we say encryption. Unfortunately, Azure Storage does not support encryption at rest yet. This feature was introduced in Preview a few weeks ago and supports only Microsoft-managed keys, not customer keys (the first goal of encryption at rest is to prevent Microsoft from accessing user data, so…); support for customer keys is in progress. Today, if you want to encrypt your IaaS VMs’ storage, you can enable BitLocker on your Windows VMs as a solution. But I don’t think customers appreciate managing BitLocker.
  • Network : This is actually a weak point of Azure. The feature provided by Microsoft for network filtering and ACLing is Network Security Groups (NSG). NSGs are very difficult to manage and implement, in addition to their limitations. The alternative is to use virtual appliances, i.e. the market-leading products. But as stated in 2.3- Network, there are many limitations so far.
  • Compliance and Trust : Day after day, Microsoft Azure is adding compliance certifications and attestations for its cloud platform. Nothing to complain about here: https://www.microsoft.com/en-us/trustcenter/Compliance


Positive points

– Compliance and trust
– Partner solutions for security

Negative points

– Storage encryption at rest (only in Preview + no customer-managed keys)
– Network security features (weak points)

2.5- Operations and Management 

Imagine you got a Mercedes S-Class with the best options and engine, but without a dashboard (this is unlikely, but just imagine it): you would drive blind, with no speed, no signals, nothing. I’m sure you would not buy this car and would prefer a bicycle instead. But this is not the case for Microsoft Azure, and especially IaaS. A bunch of tools and solutions are provided to manage and operate the platform. I can enumerate the Azure portal itself, Azure PowerShell and the CLI as the basic management ways. Azure provides Audit Logs for all operations, plus notifications, dashboards and diagnostic logs. Azure also provides OMS (Operations Management Suite), which includes a monitoring feature, and the catalog is growing. Last but not least, Azure Security Center gives you a security view of your IaaS platform.

Positive points

– A bunch of tools for management and monitoring
– OMS is cool, but not all the features are included yet

Negative points

– A lot of tools means a lot of needed knowledge

2.6- Backup

The last point of this post is backup. After one year of struggle, Microsoft finally announced in March the Public Preview of Azure Backup for IaaS V2. Azure Backup for IaaS V1 went GA a long time ago, but since I mainly talk about Azure ARM here, this was my struggle. This gap prevented us from using Azure for production since the completion of the pilot (end of 2015). No backup means no production, period. Fortunately, GA is expected in Q2, so starting from July, Azure IaaS V2 customers will be able to back up their virtual machines and hence use them for production. Customers can PoC the solution and create their policies from now on, since no major changes to operations or design will affect it (https://buildwindows.wordpress.com/2016/04/13/azure-backup-with-azure-recovery-services-features-and-limitations/)

Positive points

– GA expected before July

Negative points

– No supported backup solution for IaaS V2 today

3- The Verdict

I’m an Azure lover; I trust and believe in this solution as I believed in Hyper-V, since 2008 R2 SP1 versus the giant VMware. The question is: is Azure ready for IaaS today?

The answer: yes for envisioning, design and PoC, and starting from this summer* for production.

* This summer, many missing features are expected to arrive, like Azure Backup GA, UDR on the Gateway Subnet, Storage Encryption GA and VA high availability. This will be enough to finally benefit from the power of Azure.

Azure Backup with Azure Recovery Services : Features and limitations

Hi all,

A few days ago, Microsoft announced the Public Preview of Azure Backup via Azure Recovery Services. In this post I will enumerate the different features and limitations of the service, to help you decide if it fits your needs.

NB : This post only covers the IaaS part of Azure Backup

The following is the agenda of this post :

Introduction to Azure Backup via Recovery Services

Azure Backup for Azure IaaS features (Current and Coming)

Azure Backup for Azure IaaS limitations

1- Introduction to Azure Backup via Recovery Services

Azure Backup was first released under Azure Backup vaults, and it only supported classic Azure IaaS (Azure Service Management, i.e. IaaS V1). With the GA of the Azure Resource Manager stack in summer 2015, IaaS V2 users were not able to use Azure Backup to protect their V2 virtual machines. This was the first blocker for ARM stack adoption and one of the most wanted features of the ARM platform.



After 10 months of struggle, Microsoft announced the Public Preview of Azure Backup supporting IaaS V2 virtual machines. It’s a real relief for Azure IaaS V2 users, but also for all Azure users planning to use Azure Backup features. The main difference is that Azure Backup is now part of Azure Recovery Services vaults, and no longer Azure Backup vaults. Azure Backup vaults still exist under the ASM stack, but it’s clear that sooner or later, everything will be integrated into Azure Recovery Services.

Azure Recovery Services includes both Azure Backup and Azure Site Recovery, supporting both the ASM and ARM stacks. This is what we call great news:

  • Azure Recovery Services is integrated to the new portal (Ibiza portal)
  • Azure Backup and ASR under Recovery Services vaults support both ASM and ARM stacks

Azure Backup under Recovery Services vaults supports the 4 backup scenarios:

  • Azure Backup Server or Agent based:
    • Azure Backup Agent to Azure –> Backup files and folders to Azure Storage
    • Azure Backup with System Center Data Protection Manager –> Backup Hyper-V VMs, SQL server, SharePoint, files and folders to Azure Storage
    • Azure Backup with Azure Backup Server (MABS, code name Venus) –> Backup Hyper-V VMs, SQL server, SharePoint, files and folders to Azure Storage
  • Azure Backup on the Azure Service Fabric :
    • Azure Backup for IaaS VMs –> Backup Classic and ARM Azure Virtual Machines


This post will only detail Azure Backup for IaaS virtual machines

2- Azure Backup for Azure IaaS features (Current and Coming)

Azure Recovery Services is currently under Public Preview. The following are the features of Azure Backup and the expected features that will come with GA:

  • Backup and Restore ARM and ASM Azure virtual machines (V1 and V2)
  • Based on backup policies : two backup schedules exist, Daily and Weekly, so you can define backups that occur daily or weekly
  • Azure Backup provides different retention period possibilities : daily, weekly, monthly and yearly. Microsoft officially states a maximum retention period of 99 years; however, thanks to Azure Backup’s flexibility, you can have a practically unlimited retention period, up to 9999 years. This way, you can achieve long-term retention using the same policy and mechanism (9999 days for daily backups, 9999 weeks for weekly backups, 9999 months for monthly retention, 9999 years for yearly retention)
  • Azure Backup provides 3 recovery point consistency types : Application, File and Crash consistent recovery points. You can consult the documentation to get the requirements and prerequisites for each type
  • The backup vault’s storage redundancy can be GRS or LRS. GRS is safer (data is replicated between two regions) but more expensive (about twice the LRS price); LRS is less resilient (locally redundant) but cheaper. In my experience, because Azure Backup pricing is per protected instance (and that price is relatively high), you will notice that the storage cost is a small fraction of the total, so using GRS will not really impact the bill.
  • Azure Backup uses incremental backups : the first recovery point is a full backup, the next ones are incremental. This reduces the consumed backup storage. Due to Azure Backup’s design and mechanism, incremental backups do not impact the restore time.
  • Simple pricing model : the cost of Azure Backup is the following : Total Cost = Instance Cost + Consumed Storage. If you know the daily change or growth of your data, then you can easily predict the backup cost. See this link for Azure Backup pricing : https://azure.microsoft.com/en-us/pricing/details/backup/
  • A backup operation consists of two phases : a snapshot phase and a data transfer phase. The snapshot phase occurs at the scheduled moment. The data transfer to the backup vault begins just after the snapshot completes. This transfer may take up to 8 hours during peak hours but will always complete within 24 hours.
  • Azure Backup provides a 99.99% monthly availability SLA for backup and restore. This is only applicable to the GA product.
  • Currently, two restore options are available
    • To a Virtual Machine : A new Virtual Machine is created
    • To a Storage Account : VHDs can be restored to a Storage Account
  • I expect some features to come with and after GA, but these are my own thoughts, since this is what is actually implemented with DPM and MABS :
    • Backup/Restore of Files and folders from a VM recovery point
    • Backup/Restore SQL or/and MySQL databases directly from a VM
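As a back-of-the-envelope illustration of the pricing model above (Total Cost = Instance Cost + Consumed Storage) and of how incremental backups keep the storage side down, here is a minimal Python sketch. The per-instance fee and per-GB storage price are made-up placeholders, not actual Azure rates:

```python
def monthly_backup_cost(instances, full_gb, daily_change_gb,
                        instance_fee=10.0, storage_price_per_gb=0.05):
    """Rough monthly estimate: per-instance fee plus consumed backup storage.

    With incremental backups, the vault holds one full copy per VM plus the
    daily deltas retained during the month (~30 days).
    Prices here are made-up placeholders, not real Azure rates.
    """
    consumed_gb = instances * (full_gb + 30 * daily_change_gb)
    instance_cost = instances * instance_fee
    storage_cost = consumed_gb * storage_price_per_gb
    return instance_cost + storage_cost

# 5 VMs of 100 GB each, ~2 GB changing per day:
# instance cost = 50, storage = 5 * (100 + 60) * 0.05 = 40 -> 90.0
print(monthly_backup_cost(5, 100, 2))
```

Plug in the real rates from the Azure pricing page and your data’s daily change rate, and the same formula lets you predict the bill.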

3- Azure Backup for Azure IaaS limitations

  • Azure Backup does not currently support Premium Storage virtual machines. This feature will probably be released at GA
  • Currently, the daily backup supports 1 recovery point per day, i.e. you cannot back up a virtual machine more than once a day via the schedule. To work around this, use the manual ‘Backup now’ operation to trigger additional backups during the day. Keep in mind that two simultaneous backups are not supported, so you will need to wait for the first one to complete before triggering the next one.
  • The Azure VM agent and the backup extension are required to achieve application- or file-consistent recovery points. Otherwise, the recovery point will only be crash consistent. Be careful about the Azure VM and backup agents’ network requirements
  • The ‘Backup now’ operation does not replace a ‘snapshot’ mechanism if you want to rapidly restore a VM (the recovery point may take up to 8 hours to become available)
  • Currently, the restore to a VM is not very customizable : you cannot choose a number of properties like the storage container, VHD names, NIC names… To control the created VM, you can restore the VHDs to a storage account and use a script or template to create a VM with the configuration of your choice.
  • There is no built-in notification system with Azure Backup, so at this stage you can’t configure notifications for backup job statuses. However, there are possible workarounds : once PowerShell is supported, you can create automation scripts which get the backup job statuses and send the notifications. You can also use the Azure Audit Logs, since backup operations are logged within them
  • No PowerShell support yet, but it will be released with GA
  • You cannot edit an existing policy. If you want to change a policy, you will need to create a new one and change the VMs’ assignment. Things will change by GA, so no worries
  • You cannot change the vault’s redundancy type once you have configured at least one backup. You need to change the redundancy before any data is transferred to the vault
  • There are some limitations on the backup/restore possibilities; I will rephrase the documentation here
    • Backing up virtual machines with more than 16 data disks is not supported.
    • Backing up virtual machines with a reserved IP address and no defined endpoint is not supported.
    • Backing up virtual machines by using the Azure Backup service is supported only for select operating system versions:
      • Linux: See the list of distributions that are endorsed by Azure. Other Bring-Your-Own-Linux distributions also should work as long as the VM agent is available on the virtual machine.
      • Windows Server: Versions older than Windows Server 2008 R2 are not supported.
    • Restoring a domain controller (DC) VM that is part of a multi-DC configuration is not supported.
    • For classic VMs, restore is supported to only new cloud services.
    • Restoring virtual machines that have the following special network configurations is supported through restoring disks to a desired storage account and using PowerShell to attach restored disks to VM configuration of choice. To learn more, see Restoring VMs with special network configurations.
      • Virtual machines under load balancer configuration (internal and external)
      • Virtual machines with multiple reserved IP addresses
      • Virtual machines with multiple network adapters

Azure Backup for Iaas V2 released on Public Preview

Update 2 : MS just confirmed to me (but has not published yet) that Azure Site Recovery is supported via the new portal, via Recovery Services

Update : MS released the official documents; I was just announcing it here first


Great news for Azure IaaS V2 (ARM) users. Yesterday, Microsoft announced the release of the Public Preview of Azure Backup for IaaS V2 via ‘Recovery Services vaults’

This is a quick step-by-step guide to rapidly configure your VM backups

NB : Azure Backup via Recovery Services vaults lets you back up V1 and V2 VMs (Classic and ARM). It’s recommended that you use it just to get hands-on experience and not for production, since it’s not covered by any SLA or commitment (Preview). MS has not published guides to migrate existing Backup vaults to Recovery Services vaults, but I think this is planned.

Let’s start:

Login to the Azure Portal (https://portal.azure.com). Go to Browse –> Recovery Services vaults


Click the Add + button


Type a name for the RS vault, and choose a subscription, a resource group and a region. You need to know that a Recovery Services vault is tied to a region: you cannot backup/restore resources to/from a different region.


After the vault creation, you can discover the different options available. Just for information : Recovery Services vaults include the Azure Backup services (VMs, files, SCDPM) and ASR (Azure Site Recovery). ASR is currently in Private Preview and is not yet released. File and SCDPM support will come soon too.

To configure a Backup, click on Backup +


Select the backup type. As mentioned, only Azure Virtual Machine backup is supported for now


You will now choose the Backup Policy. You can select an existing policy or create a new one.


The policy has the following options:

Name : Type a name for your policy (Class1, Class2, Class3). Just a recommendation: do not use names like ‘Daily’ or ‘Weekly’, since the retention may differ between two ‘daily’-based policies

Backup Frequency : There are only two options plus a start hour. You can make daily backups or weekly backups

Retention : This is what is great about Azure Backup, since within the same policy you can configure your retention and long-term retention (daily, weekly, monthly and yearly!)
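To make the combined retention idea concrete, here is a small Python sketch of a grandfather-father-son style classification: given the date of a daily recovery point, it decides the longest retention tier that point would qualify for. The tier rules (yearly on January 1st, monthly on the 1st, weekly on Sunday) are illustrative assumptions, not Azure Backup’s actual scheduling:

```python
from datetime import date

def retention_tier(point: date) -> str:
    """Classify a daily recovery point into the longest retention tier it
    qualifies for, GFS-style. The rules below are illustrative assumptions,
    not Azure Backup's actual behavior."""
    if point.month == 1 and point.day == 1:
        return "yearly"   # falls under the yearly retention range
    if point.day == 1:
        return "monthly"  # falls under the monthly retention range
    if point.weekday() == 6:  # Sunday
        return "weekly"   # falls under the weekly retention range
    return "daily"        # kept only for the daily retention range

print(retention_tier(date(2016, 1, 1)))   # yearly
print(retention_tier(date(2016, 4, 1)))   # monthly
print(retention_tier(date(2016, 4, 10)))  # weekly (a Sunday)
print(retention_tier(date(2016, 4, 13)))  # daily
```

The point of a single policy is exactly this: one schedule produces the recovery points, and each tier simply keeps its qualifying points for a longer window.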


Once the Policy is selected, you can choose which Virtual Machines to backup with this Policy. Note that Classic VMs and ARM VMs can be backed up with the same policy.


You can verify that the selected VMs appear under the Backup Items blade, along with some other information like the last backup status, the policy…


On the Backup Jobs blade, you can find all the backup jobs of all VMs. You can change the period using the Filter button


This is just a teaser; more is coming. Try it, and ask me questions in the comments, but as a reminder:

  • Do not use it in production; wait for the GA (maybe 2 months away)
  • ASR is not supported yet
  • A lot of enhancements are coming (mainly to the user experience), stay tuned