New Azure Portal Feature: Find your Quotas and Limits values!

Hello All,

This is a quick post to highlight a fresh new Azure Portal feature that will help a lot of admins.

You all know that you cannot create as many Azure resources as you want: there are limits and quotas on the number of deployed resources. This information is very important, even crucial, when designing your Azure infrastructure.

Here are some examples:

  • Network Security Groups: By default, you cannot create more than 100 NSG objects within an Azure region (that is the Azure Resource Manager limit model; the ASM limit model is per subscription, not per region). So if you are using NSGs to secure your environment, you will need to track the object count usage –> this is the subject of this post
  • Static public IP addresses: By default, you cannot create more than 20 static public IP addresses within an Azure region, so monitoring and tracking this resource usage is important

You can always visit the official documentation for the latest information about service limits, quotas and constraints. Keep in mind that for several resources, you can ask Microsoft Support to increase a limit value.

Back to the main goal of this post: you can now view the usage of your resources and their status against the quota values.

Go to the Azure Portal (Portal.azure.com) –> Subscriptions –> Select the Subscription –> Usage + Quotas


You can filter the items for a more customized view, and you can use the provided link to directly open a support case to increase the limits.
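If you prefer scripting the check, something along these lines should work with the AzureRM PowerShell modules (cmdlet availability varies with the module version, and the region names are just examples):

# Compute quota usage (cores, availability sets...) for a region
Get-AzureRmVMUsage -Location "West Europe"

# Network quota usage (NSGs, public IPs...), flagging what is close to its limit
Get-AzureRmNetworkUsage -Location "westeurope" |
    Where-Object { $_.Limit -gt 0 -and $_.CurrentValue -ge 0.8 * $_.Limit } |
    Select-Object @{n='Resource';e={$_.Name.LocalizedValue}}, CurrentValue, Limit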


How to edit an existing Azure Custom RBAC role?

Hello all,

Azure provides the ability to create custom roles in order to better fit your needs and give admins more flexibility in choosing the permissions they want to grant to users.

Many posts already discuss Azure RBAC and custom roles. In this post, I will clarify the right method to modify an existing custom role.

When you create a custom role, you configure many parameters:

  • Custom Role Name
  • Custom Role description
  • Custom Role Actions
  • Custom Role NotActions
  • Custom Role assignable scopes

There are some scenarios where you would like to change one or more of the definitions, for several reasons:

– You already created a custom role assigned to only some scopes. You want to extend or reduce the scopes

– You decided to add or remove an Action or a NotAction on an existing custom role

– You noticed a typo on the description and you decided to change it

– And more reasons can come…

How to proceed?

This step-by-step uses Azure PowerShell, so download and install it before proceeding (Download and Install Azure PowerShell).

As an example, I will make several changes to the custom role “Azure DNS Reader”, which initially has a scope at the subscription level “/subscriptions/1111111-1111-1111-11111-11111111111”. The changes are:

  • New name –> Azure DNS Zone RW
  • New description –> “Lets you view and modify Azure DNS zone objects”
  • Add or remove an Action –> “Microsoft.Network/dnsZones/write”
  • Add or remove a NotAction –> “Microsoft.Network/dnsZones/write”
  • Add or remove a scope –> “/subscriptions/222222-2222-2222-2222-2222222222222”

1- Login to Azure

Login to Azure using the following command:

Login-AzureRmAccount

2- Get the Custom Role Definition:

  • If your custom role is assigned to the default subscription: $CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader"
  • If your custom role is assigned to a scope: $CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader" -Scope "/subscriptions/1111111-1111-1111-11111-11111111111"


3- Make changes* and commit

*Note that you can make all the changes first and commit once at the last step (a consolidated sketch is given at the end of this section)

A- Change the role Name
$CustomRole.Name = "Azure DNS Zone RW"
$CustomRole | Set-AzureRmRoleDefinition


B- Change the role description
$CustomRole.Description = "Lets you view and modify Azure DNS zone objects"
$CustomRole | Set-AzureRmRoleDefinition


C- Add or Remove an Action
$Action = "Microsoft.Network/dnsZones/write"

$CustomRole.Actions.Add($Action)
#or to remove
$CustomRole.Actions.Remove($Action)

$CustomRole | Set-AzureRmRoleDefinition

D- Add or Remove a NotAction
$NoAction = "Microsoft.Network/dnsZones/write"

$CustomRole.NotActions.Add($NoAction)
#or to remove
$CustomRole.NotActions.Remove($NoAction)

$CustomRole | Set-AzureRmRoleDefinition

E- Add or Remove a Scope
$Scope = "/subscriptions/222222-2222-2222-2222-2222222222222"

$CustomRole.AssignableScopes.Add($Scope)
#or
$CustomRole.AssignableScopes.Remove($Scope)

$CustomRole | Set-AzureRmRoleDefinition

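To wrap up, here is the whole change set from this post batched together and committed once, as noted in step 3 (same cmdlets as above, same placeholder subscription IDs):

$CustomRole = Get-AzureRmRoleDefinition -Name "Azure DNS Reader"

$CustomRole.Name        = "Azure DNS Zone RW"
$CustomRole.Description = "Lets you view and modify Azure DNS zone objects"
$CustomRole.Actions.Add("Microsoft.Network/dnsZones/write")
$CustomRole.AssignableScopes.Add("/subscriptions/222222-2222-2222-2222-2222222222222")

# One commit for all the changes
$CustomRole | Set-AzureRmRoleDefinition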

How to protect and backup your branch-office data and workloads?

Hi all,

This is a quick post where I will share a recent experience from a customer advisory call.

The customer has several dozen branch offices. In each site, a ‘big’ server is deployed running several virtual machines that provide ‘vital’ infrastructure like:

  • Active Directory Domain Controller (RODC) + DHCP + DNS + Printer services
  • File Server
  • SCCM Distribution point

The question arose when we studied some DR and service-continuity scenarios. The branch-office workloads were in scope but with very low priority, and the question was: how can I minimally protect the branch-office data with zero investment?

This wasn't a very difficult question, and the answers were as follows:

  • AD + DNS + DHCP + Printer services:
    • AD services: When the RODC is not reachable, clients automatically contact the primary domain controllers on the main site (through S2S VPN or MPLS). This is built-in AD behavior –> Solved
    • DNS: The secondary DNS servers configured via DHCP are the main-site DNS servers –> Solved
    • DHCP: This is a vital service; without DHCP, clients will not obtain IP addresses and will not be able to work. The solution was to configure (available since Windows Server 2012) a hot-standby failover relationship with the main site; the branch-office network devices only need to support IP helpers (see the sketch after this list) –> Solved
  • SCCM DP: The SCCM distribution point serves deployed packages from a nearby location (50 clients downloading an Office 2016 package (1 GB) or Windows updates from a local server is better than over a VPN connection). Just like with domain controllers, if a client cannot reach the ‘nearest’ DP server, it will contact the next one, which can be the main-site DP –> Solved
  • File server: This was the hardest question. How can we protect the file servers' data and rebuild them in case of disaster, data loss or anything similar? Let's discuss this case more deeply
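As an illustration, the hot-standby relationship can be created with the DhcpServer PowerShell module (Windows Server 2012 and later). The server names, scope and secret below are hypothetical:

# Create a hot-standby failover relationship for the branch scope.
# Specifying -ServerRole selects hot-standby mode (rather than load balance).
Add-DhcpServerv4Failover -ComputerName "DHCP-Branch01" -PartnerServer "DHCP-Main" `
    -Name "Branch01-Main" -ScopeId 10.10.1.0 -ServerRole Active `
    -ReservePercent 5 -SharedSecret "SomeSharedSecret"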

The file server story

The file server is not stateless

What sets the file server apart from the other servers is that it holds changing data. If we lose this data (data loss, ransomware, accidental deletion…), there is no built-in way to recover it.

Availability or Recovery?

There are two requirements for file server data:

Availability: This is the need to access the data even if the file server goes down

Recovery: This is the need to recover the data when needed. Recovery can mean rebuilding the server (in case of server loss) or recovering a set of files/folders as part of an item-level recovery (deleted files, old versions, ransomware…)

The file server solution

Faced with both needs, I proposed the easiest way to achieve each one:

Availability: The easiest way to achieve availability for file servers (in branch offices, with minimal infrastructure) is to enable DFS-R and DFS-N. DFS-R replicates your files to another server on the main site. DFS-N creates a virtual view of the shared folders, so the same UNC path lands on the office's file server and, in case of failover, on the main-site file server (where the replicated files reside). This solution is very simple to implement, and the main-site server can be a target for multiple offices. The requirements are office-to-main-site bandwidth and main-site storage.
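A rough sketch of that setup with the DFSR and DFSN PowerShell modules; the server names and paths are hypothetical, and a real deployment also needs staging quotas, ACL and topology decisions:

# Replicate the branch shares to a main-site server (DFS-R)
New-DfsReplicationGroup -GroupName "Branch01-Files"
New-DfsReplicatedFolder -GroupName "Branch01-Files" -FolderName "Shares"
Add-DfsrMember -GroupName "Branch01-Files" -ComputerName "FS-Branch01","FS-Main"
Add-DfsrConnection -GroupName "Branch01-Files" -SourceComputerName "FS-Branch01" -DestinationComputerName "FS-Main"
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "Shares" -ComputerName "FS-Branch01" -ContentPath "D:\Shares" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "Branch01-Files" -FolderName "Shares" -ComputerName "FS-Main" -ContentPath "E:\Replicas\Branch01" -Force

# One UNC path for the users (DFS-N), with the main-site server as a second root target
New-DfsnRoot -Path "\\contoso.com\Files" -TargetPath "\\FS-Branch01\Shares" -Type DomainV2
New-DfsnRootTarget -Path "\\contoso.com\Files" -TargetPath "\\FS-Main\Branch01"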

Recovery: When we say recovery, we say backup. The challenge was to find a ‘simple’ backup solution that:

  • Backs up the shares
  • Restores files using an item-level restore mechanism (user and admin experience)
  • Does not use local storage, as the office's infrastructure is limited (besides, local storage does not protect against a site disaster)

I was very lucky when this ‘small’ challenge came up, since I was already aware of the Azure Backup MARS agent experience.

Why was I lucky?

Backing up (and restoring data) via the Azure Backup MARS (Microsoft Azure Recovery Services) agent is very interesting in this case for several reasons:

  • Deployment simplicity: To back up data, you just need to download the MARS agent, install it, and choose what and when to back up, and where the data should be stored
  • No infrastructure: You don't need to deploy a backup infrastructure or provide local storage. The MARS agent backs up to Azure cloud storage via Azure Recovery Services vaults. A vault is a cloud backup space that you create first (one per file server, one per region, or one for all) and then provide during the backup configuration wizard.
  • Item-level restore: The admin can easily make an item-level restore of backed-up items
  • Limitless capacity and retention: Azure Recovery Services provides limitless storage and retention periods of up to 99 years
  • Encrypted backup data: The data backed up to the cloud is encrypted using a key that only you know
  • Management from the cloud: Operations management (backup operations, jobs, consumed storage, registered servers, notifications, alerts…) is easily done from a single portal, the Azure Portal (a configuration sketch follows the picture below)

[Picture: Backup using MARS agent steps (Microsoft credit picture)]
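Once the MARS agent is installed, the "what, when, where" configuration can also be scripted with the agent's MSOnlineBackup cmdlets. A minimal sketch, assuming the vault credentials file is already downloaded; the paths, passphrase and schedule are placeholders:

# Register the server to its Recovery Services vault (run on the file server)
Start-OBRegistration -VaultCredentials "C:\Temp\MyVault.VaultCredentials" -Confirm:$false

# Encryption passphrase: the key that only you know (keep a copy offline!)
$pass = ConvertTo-SecureString -String "a-long-passphrase-kept-offline" -AsPlainText -Force
Set-OBMachineSetting -EncryptionPassphrase $pass

# What to back up, when, and how long to keep it
$policy = New-OBPolicy
Add-OBFileSpec -Policy $policy -FileSpec (New-OBFileSpec -FileSpec "D:\Shares")
Set-OBSchedule -Policy $policy -Schedule (New-OBSchedule -DaysOfWeek Saturday -TimesOfDay 21:00)
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy (New-OBRetentionPolicy -RetentionDays 30)
Set-OBPolicy -Policy $policy -Confirm:$false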

What else?

All the requirements were met. The backup solution fits the needs and has a very short time to market (TTM).

Conclusion

If you are facing the challenge of protecting branch-office data (connected to a main site), do not hesitate to use ‘simple’ ways to achieve it, in order to simplify your architecture and optimize costs. Use Azure Backup to protect any workload (even Linux is supported) and to guarantee that your data is safe in a remote location. The following table summarizes this post:

Workload | How to ensure availability or recovery
Active Directory domain controller | The failover to another DC is built-in
DHCP | Windows Server 2012 (and later) DHCP failover
DNS | Secondary remote DNS servers
File server | Availability: DFS-R + DFS-N; Backup/Restore: Azure Backup via MARS agent

Understanding Log Analytics and OMS licensing

Hi all,

At Ignite, Microsoft announced a lot of OMS news, including a new way to purchase OMS Log Analytics. This created a lot of frustration, since the new licensing model is not straightforward to understand, nor is it obvious which model suits a given customer.

In this post, I will try to explain the new licensing model, including recommendations and a simulation tool (an Excel sheet) to simulate and compare Log Analytics costs under each model. In addition, I will explain the new OMS offers.

NB: All pictures are Microsoft credit.

1- OMS services categories

The first change is a classification of the services offered via OMS into 4 categories, depicted in the following picture. Microsoft calls them service offerings.


The 4 categories or service offerings are:

  • Insight and analytics
  • Automation & Control
  • Security & Compliance
  • Protection & Recovery

Each category includes a set of services and features. What we can notice at first glance is that:

  • Log Analytics is now just one of the services provided via Insight & Analytics, and no longer includes all the solutions, as we can see
  • Automation minutes, Azure Backup and ASR instances can now be purchased via a service offering

2- How can I purchase Log Analytics?

When creating an OMS Log Analytics workspace, you have the choice between 3 tiers:

2.1- Free tier

Microsoft provides a free tier to let you test some OMS features. The free tier allows up to 500 MB per day of ingestion and stores data for 7 days. Automation offers up to 500 minutes per month for free, and Desired State Configuration for up to 5 nodes.

 

2.2- Standalone tier

The standalone tier provides only the Log Analytics services, depicted in the following picture. The pricing follows the old model, which is volume-based: the more you upload and retain data, the more you pay.


NB: Services like Network Performance Monitor and Security and Audit are not included, and cannot be used under this tier.

2.3- OMS tier

The OMS tier allows you to choose which service offering or offerings to include in your workspace, enabling the benefits of each one.

With the OMS tier, you can include from one to 4 service offerings, depending on your needs and budget (we will see more information later in this post).

Note that the OMS tier licensing and cost differ from the Standalone one, since it's a mix of included capacity (part of the price) and pay-as-you-go capacity.

3- Logs retention period

As you can see, OMS is no longer exclusively a log collector/analyzer service; it can include other services like Automation minutes, Backup and Replication.

For the ‘Log’ service, you may wonder about the retention period that OMS provides today. Currently, OMS offers a retention period of up to 2 years, with 1-day granularity. This means you can configure your workspace (Standalone and OMS tiers; the free tier has a fixed 7-day retention) to retain data anywhere between 30 and 732 days.

The following points are very important when deciding about the retention period value:

  • The retention period is workspace-wide, which means it applies to all logs within the workspace. You cannot choose a retention period per log type or per solution
  • The OMS and Standalone tiers include a default retention period of 30 days. If you raise this value, charges will apply accordingly.
  • When you lower the retention period (from 732 to 365 days, for example), OMS will drop all the logs and collected information beyond the 365 days, and you will pay less in the following months.
  • The previous Log Analytics tiers (Standard and Premium) are no longer available for purchase. Their retention periods are fixed and cannot be changed.

 

4- OMS licensing and cost

The new licensing model is different from the previous one, where you simply multiplied the uploaded gigabytes by the per-GB cost. With this model, a smarter calculation is needed to estimate the final cost.

4.1- Explaining the licensing model

In this section, I will explain the licensing of the OMS tiers provided today:

  • Standalone
  • OMS

4.1.1- Standalone Tier

The Standalone tier, as discussed earlier in this post, only allows you to benefit from the ‘Log Analytics’ services, which include the following:


The Standalone tier follows this cost model:

  • The default retention period is 31 days. You pay for what you upload at a fixed per-GB cost (2.3 $ per GB). For example, at the end of the month (if uploading started at the beginning of the month), you pay for the data stored in the workspace multiplied by the per-GB cost. Example: at the end of the month, you have uploaded 30 GB of data –> total cost = 30 * 2.3 $ = 69 $
  • If you increase the retention period, you are charged an additional 0.1 $ per GB for each additional month. Because retention is a per-day increment, it's more precise to say that you are charged an additional (0.1/31) $ per GB for each additional retention day.

This leads to a formula (note that an Excel spreadsheet is attached to this post to help you make a cost estimation). The formula below gives the total cost at the end of the retention period, i.e. the total cost of sending logs to a Standalone workspace over one retention period:

Total Cost = UploadedGBperMonth * RetentionPeriodInMonths * BaseCostPerGB
           + UploadedGBperMonth * (RetentionPeriodInMonths - 1) * AdditionalRetentionCostPerGBPerMonth

           = UploadedGBperMonth * [RetentionPeriodInMonths * BaseCostPerGB + (RetentionPeriodInMonths - 1) * AdditionalRetentionCostPerGBPerMonth]

where:

  • UploadedGBperMonth: the total uploaded data in GB per month
  • RetentionPeriodInMonths: the configured retention period for the workspace (example: 1 month, 12 months, or 8.2 for 250 days)
  • BaseCostPerGB: the cost per uploaded GB during the first month
  • AdditionalRetentionCostPerGBPerMonth: the cost per GB for each additional retention month

Example

– Consider you have 200 reporting entities (network equipment, servers)

– Each entity generates an average of 300 MB per day sent to OMS

– The retention period is set to 10 months

– The base cost per GB is 2.3 $ per GB

– The additional cost per GB for one additional retention month is 0.1 $ per GB

Total cost = 200*(300/1024)*31 * [10 * 2.3 + (10-1)*0.1] = 43412 $
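The same example as a quick PowerShell calculation, with the numbers taken from above:

$entities        = 200      # reporting entities
$mbPerDay        = 300      # average ingestion per entity, in MB
$retentionMonths = 10
$baseCostPerGB   = 2.3      # $ per uploaded GB
$extraPerGBPerMo = 0.1      # $ per GB per additional retention month

$uploadedGBperMonth = $entities * ($mbPerDay / 1024) * 31
$totalCost = $uploadedGBperMonth * ($retentionMonths * $baseCostPerGB + ($retentionMonths - 1) * $extraPerGBPerMo)
'{0:N0} $' -f $totalCost    # ~43,412 $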

4.1.2- OMS Tier

The OMS tier licensing is quite different from the Standalone tier. It works as follows:

  • The licensing is node-based, which means that you pay per reporting node.
  • Each node license includes a ‘default usage’ which depends on the provided services
  • When the usage crosses the ‘default usage’ limit, you pay for the additional usage
  • The OMS tier includes more than the ‘Log Analytics’ services, such as Automation and Backup.
  • The services where ‘data is stored’ have a default retention of 30 days. The retention period can be raised up to 2 years at additional cost
  • A license is exclusive to the node, which means that you cannot, for example, use the same license to collect logs from one node and back up another node

A- Licenses count

One can ask how many licenses are needed to cover one's needs. The answer: it depends on which categories (service offerings) your nodes will benefit from.

For example: you link 10 nodes to a workspace, configure the workspace to collect some Windows event logs, and deploy the ‘Security and Audit’ and ‘Network Performance Monitor’ solutions. Looking at the services provided by the categories (first picture of this post), you can deduce the following:

  • The Windows event logs fall under the ‘Log Analytics’ sub-category of Insights & Analytics –> 10 Insights & Analytics licenses
  • The ‘Security and Audit’ solution is under the Security & Compliance category –> 10 Security & Compliance licenses
  • The ‘Network Performance Monitor’ solution is under the Insights & Analytics category –> 10 Insights & Analytics licenses (already acquired)

–> You will need to acquire 10 Insights & Analytics and 10 Security & Compliance licenses

B- E1 and E2 licenses

You can reduce licensing costs by acquiring E1 or E2 licenses when possible:

  • An E1 license = Insights & Analytics + Security & Compliance + System Center 2016
  • An E2 license = all categories + System Center 2016


In the example given in ‘A- Licenses count’, we could acquire 10 E1 licenses instead of 10 Insights & Analytics and 10 Security & Compliance licenses.

C- Included advantages

When you license a node under OMS, you will benefit from the following included advantages:

  • Insights & Analytics
    • 500 MB per day of uploaded data (logs)
    • 31 days of retention

—> When you cross these limits, you will be charged as follows:

    • For each additional GB beyond the 500 MB per day, you will pay a PerGB cost (~2.3 $ per GB)
    • For each additional retention period (beyond the 1 included month), you will pay a PerGBperMonth cost (~ 0.1 $ per GB per Month)
  • Automation & Control
    • Unlimited automation minutes for out of the box solutions
    • 10 minutes per day per node for custom runbooks
    • A DSC node for each license (A DSC node means a node managed by DSC)

—> For each additional minute beyond the included time, a PerMinute cost is charged (~0.002 $ per minute)

  • Security & Compliance
    • The same principle as Insights & Analytics, applied to ingested Security and Audit data
    • An Azure Security Center managed node per license
  • Protection & Recovery
    • A license includes the right to back up a node using Azure Backup (with 500 GB of storage) and to protect it via ASR. Additional storage will be charged (for backup beyond 500 GB, and for ASR)

5- Purchasing services without OMS

Just as ‘Log Analytics’ can be acquired standalone rather than via OMS, here is how you can acquire the other services in standalone mode.

NB: This section is a copy/paste from the official OMS licensing Microsoft document, to avoid paraphrasing something already clear.

5.1- Automation

Automation is available in Free and Basic tiers. Automation offers a subset of the features offered in Control & Automation; it does not include Change Tracking or Update Management. Billing is based on the number of job runtime minutes used in the month. Charges for process automation are incurred whenever a job runs. Job minutes are aggregated across geographies.


5.2- Backup

The price of Azure Backup depends on the size of each protected instance. Azure Storage is a separate charge. Customers have the flexibility to choose between LRS or GRS block blob storage, and benefit from cool storage.


5.3- Site Recovery
Azure Site Recovery is billed based on the number of protected instances. Every instance protected with Azure Site Recovery is free for the first 31 days, as noted below.


5.4- Desired State Configuration
DSC is available in Free and Basic tiers. DSC offers a subset of the features offered in Control & Automation; it does not include Automation, Change Tracking or Update Management. Billing is based on the number of nodes registered with the service. Charges for Automation DSC start when a node is registered with the service and stop when the node is unregistered. A node is any machine whose configuration is managed by DSC.


 

6- System Center and OMS

Microsoft released the possibility of joining OMS and System Center licensing under the same plan. We already saw that in ‘B- E1 and E2 licenses’, where the E1 and E2 plans include System Center 2016.

And, if you are already a customer of System Center, covered by Software Assurance, you have two other options:

– If you are in the middle of a multi-year System Center agreement, you can purchase an E1 or E2 add-on, which extends the node to use OMS services and is naturally cheaper than purchasing E1 or E2 licenses separately.

– If you are planning to renew your System Center agreement, you can acquire an OMS subscription for System Center, which is also cheaper than acquiring licenses separately.


7- Do I purchase a service via OMS or in standalone mode?

This is the question everyone, including my customers, is asking: if I decide to use a service, is it better to acquire it via an OMS plan or a standalone plan?

The answer: this is mathematics! Calculate the total cost of a standalone purchase and of an OMS purchase, then compare.

Here are some points that are generally applicable:

  1. If you aim to use more than one service, it's cheaper to acquire these services via OMS
  2. If you aim to use services that are not provided in standalone mode (like Service Map, Security and Audit…), you need to acquire them via OMS
  3. If the OMS license includes an initial quota (like Log Analytics, Automation, Security and Audit), then beyond a certain consumption rate it becomes cheaper to acquire the services via OMS

8- Log Analytics Standalone vs OMS Insights & Analytics

With this post I'm sharing an Excel sheet that lets you compare the cost of acquiring Log Analytics via the Standalone mode or via the Insight & Analytics service offering. Download it here: Log-Analytics-Cost-Calculator
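As a companion to the sheet, here is a minimal sketch of the same comparison in PowerShell. The OMS per-node price below is a placeholder to replace with the figure from your price list, and a 31-day retention is assumed for both models:

$nodes            = 100
$gbPerNodePerDay  = 0.4        # average daily ingestion per node, in GB
$standalonePerGB  = 2.3        # Standalone: cost per uploaded GB
$omsNodePerMonth  = 15         # OMS Insight & Analytics node price (placeholder!)
$includedGBperDay = 500 / 1024 # quota included with each OMS node license
$overagePerGB     = 2.3        # cost per GB beyond the included quota

$standaloneCost = $nodes * $gbPerNodePerDay * 31 * $standalonePerGB
$overageGB      = [Math]::Max(0, $gbPerNodePerDay - $includedGBperDay) * $nodes * 31
$omsCost        = $nodes * $omsNodePerMonth + $overageGB * $overagePerGB

'Standalone: {0:N0} $/month | OMS: {1:N0} $/month' -f $standaloneCost, $omsCost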

9- Useful links

Many useful links are available today; here are the most interesting:

1- OMS Licensing official material

2- Understanding OMS

 

Azure TCO calculator Public Preview

Hi,

Microsoft just announced the public preview of its TCO calculator tool.

This tool will help you see whether Azure will reduce the TCO of your on-premises virtualization or physical platform. It gives you a cost forecast over a 3-year period. This is a valuable tool for organisations planning to renew their on-premises infrastructure, or a part of it.

This tool is a must for Cloud Architects, consultants or IT managers.

Give it a try; I will update the article with the feedback links.

https://www.tco.microsoft.com 

Azure Virtual Machines single instance SLA, a big step!

Hi all,

Early this week, Microsoft made an exciting announcement: an SLA of 99.9 % for single-instance virtual machines: https://azure.microsoft.com/en-us/blog/announcing-4-tb-for-sap-hana-single-instance-sla-and-hybrid-use-benefit-images/

Before this announcement, single-instance VMs (which are not part of an availability set) were not covered by an SLA. This was unattractive for workloads that do not support, cannot afford or do not need a multi-instance deployment. That generally applies to legacy workloads and, to be honest, to the majority of non-critical workloads, as well as SMB workloads that cannot afford investing in redundancy and HA.

Many of my customers avoided migrating workloads to Azure just because of this, while AWS had been offering such an SLA for a while.

With this announcement, customers have the guarantee of a 99.9 % SLA for their VMs, which means a maximum downtime of 8.76 hours per year –> about 44 minutes per month.

Do not forget that this is only applicable to virtual machines whose disks are all stored on Premium Storage.

Optimize costs of your Azure Virtual Machines

Hi,

This is a very common topic, discussed in many blog posts and forums: how to optimize the costs of Azure virtual machines?

There are many aspects to optimization. The first, which also applies to on-premises VMs, is the size of the VM (hardware optimization): if you track your virtual machines' resource usage, you can see whether your VMs are oversized and decide to resize them to reduce their costs. But this optimization, even if valuable, is a one-time optimization.

This is why I'm discussing here a better aspect of optimization, related to the uptime of the virtual machine.

The Stopped/Deallocated VM state

When you power off an Azure virtual machine, there are two possible states: Stopped and Stopped (Deallocated). The short story: if you shut down your VM to the Stopped state, it remains billable even though it's actually stopped; if you shut it down to the Stopped (Deallocated) state, it is not charged during the downtime.

What is the difference, and why not always use the Stopped (Deallocated) mode?

Many blog posts discuss the difference between both statuses. Here are the key differences for ARM virtual machines:

– If you stop your virtual machine from inside the OS, or via a PowerShell/CLI/API request without the deallocate flag, the VM is stopped but not de-provisioned from its Azure host (a Hyper-V host). When you start the VM, it starts quickly and keeps all its dynamic network parameters (NIC IP addresses).

– If you stop your virtual machine from the Azure portal, or via a PowerShell/CLI/API request with the deallocate flag, the VM is stopped and de-provisioned from its Azure host (compute resources are released). When you start the VM, Azure redeploys it (the same VM) and you will notice a longer start time. The VM will theoretically get different IP addresses if static IP addresses were not used.
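With the AzureRM PowerShell module, the flag that makes the difference is -StayProvisioned (the resource group and VM names below are hypothetical):

# Stopped, still provisioned on its host: compute billing continues
Stop-AzureRmVM -ResourceGroupName "RG-Demo" -Name "VM01" -StayProvisioned -Force

# Stopped (Deallocated), the default without the flag: compute billing stops
Stop-AzureRmVM -ResourceGroupName "RG-Demo" -Name "VM01" -Force

# Starting works the same way in both cases
Start-AzureRmVM -ResourceGroupName "RG-Demo" -Name "VM01"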

What to optimize?

Based on this behavior, you can schedule the stop (deallocate) and start of the virtual machines that can tolerate a downtime window during a certain period.

Examples

  • Test/Dev/Int/Rec virtual machines can be stopped during the week nights and during the weekends.
  • Virtual machines which are just used on a defined period of time can be stopped when not used
  • Even production virtual machines can be stopped if they are not used (A user file server can be stopped during the weekend)

Cost optimization gain example

This example is based on an existing SMB customer planning to move all its non-production virtual machines to Azure IaaS. The first wave will include around 200 VMs, with the following size distribution:

VM Count | Azure Size | Cost per month (744 hours, Windows-based, North Europe) | Cost per month (VMs stopped between 8PM and 7AM, and on Sunday)
49 | A1 | €2,766.90 | €1,305.35
56 | D1v2 | €4,286.50 | €2,022.26
70 | A2 | €7,905.43 | €3,729.58
26 | D2v2 | €3,980.32 | €1,877.81
Total | | €18,939.16 | €8,935.00

Results:

  • A total gain of around 10K€/month –> 120k€/year
  • A cost reduction of around 53 %

How to implement it ?

There are many ways to achieve this goal; the most suitable is an automation mechanism which stops and starts the virtual machines based on a schedule.

The most suitable automation mechanism is Azure Automation, which can be used easily and without deploying any infrastructure.
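To give an idea, here is a stripped-down runbook sketch that deallocates every VM carrying a hypothetical AutoShutdown tag; the Microsoft solution linked below is richer, and the Run As connection name assumed here is the default one created by Azure Automation:

# Authenticate with the Automation account's Run As service principal
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId -CertificateThumbprint $conn.CertificateThumbprint

# Deallocate every VM tagged AutoShutdown = yes; schedule this runbook for 8PM
Get-AzureRmVM | Where-Object { $_.Tags["AutoShutdown"] -eq "yes" } | ForEach-Object {
    Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
}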

There are many community contributions achieving this goal, but I prefer the one published by Microsoft, which gives a more customizable downtime window per virtual machine, using ARM tags.

Here the link : https://docs.microsoft.com/en-us/azure/automation/automation-scenario-start-stop-vm-wjson-tags