Azure Virtual Machine Serial Access : Finally available

Hi all,

5 years after the first feedback request, Azure has finally added the Console Access feature to its Virtual Machine service.

A bit of history

In the past, accessing an Azure virtual machine was only possible via the network. Anything preventing you from accessing or managing it over the network path (SSH, RDP, remote PowerShell…) had a dramatically bad impact –> Redeploy

For example :

  • You have enabled the firewall on the Virtual Machine –> Redeploy
  • You have a blue screen (that you could fix by changing a setting) –> Redeploy
  • Your screen is stuck on the ‘Please hit a key to continue’ prompt –> Redeploy

Today, Azure has added the Serial console access feature, which means that you can access the Virtual Machine just as if you were connected to its console port –> No need for network connectivity to the OS

This is a long-awaited feature that is currently in Public Preview; check it here :

Future improvements

  • Adding F8 key support to reach early-stage boot screens
  • Adding RDP support for Windows, because only cmd or PowerShell administration is provided today

Enjoy !!



Azure Networking Cross connectivity : The Options

Hi all,

I’m continually working on designing Cloud solutions, specifically Azure-based Cloud solutions. One of the building blocks when starting to work with Azure is the networking infrastructure that we need to build.

One of the challenges that we will certainly encounter is how to design the cross connectivity model between the different networks. Cross connectivity can involve the following networks :

    • On-premises DC to Azure VNET
    • Azure VNET to Azure VNET
    • Azure VNET to Azure VNET in a different region
    • ROBO to Azure VNET

1- The options

The following table shows the different options that you can use to cross connect different networks to an Azure VNET :

| Network | Options |
| --- | --- |
| On-premises DC | ExpressRoute; Site to Site (S2S) VPN; 3rd-party S2S VPN |
| Azure VNET (same region) | VNET Peering; ExpressRoute; 3rd-party S2S VPN |
| Azure VNET (different region) | VNET to VNET (VPN); VNET Peering (cross-region); ExpressRoute; 3rd-party S2S VPN |
| ROBO | Site to Site (S2S) VPN; 3rd-party S2S VPN |

2- Understanding the options

2.1- Site to Site VPN

The Site to Site VPN is a connectivity option that can be used to connect an Azure VNET to any network over the internet, using VPN technology. The S2S VPN is the fastest way to establish a trusted private connection between your network and an Azure VNET.

The S2S VPN requires that you deploy an Azure VPN Gateway on the Azure VNET (an Azure-managed gateway), and establish a VPN connection with a compatible VPN device on your side. The Azure VPN Gateway provides a 99.9% SLA (under the hood, 2 VPN gateway instances in Active/Passive mode). You can additionally achieve a more resilient configuration with the Active/Active configuration (no published SLA).

a- Requirements and prerequisites

  • A VPN Gateway deployed on each VNET (2 VNETs can’t use the same VPN gateway)
  • A compatible VPN device on your side (Note that even if your device is not listed, it can be used as long as it supports the VPN configuration required by Azure)

b- Pros and Cons

Pros :

  • The fastest way to establish cross connectivity
  • No special configuration (VPN over internet)
  • A good solution for ROBO

Cons :

  • No quality SLA (internet : latency, jitter…)
  • A maximum throughput of 1.25 Gbps

c- Pricing

You will pay for :

  • The deployed VPN Gateway (per hour)
  • The outbound data leaving Azure

2.2- Express Route

ExpressRoute is the Microsoft offering that enables customers to establish a low latency, private and high bandwidth network connection to the Azure datacenters. Without entering into the technical details, ER is a Layer 3 private connection to Azure networks; it travels through a dedicated circuit from your datacenter to the Azure networks, without going over the Internet.

ExpressRoute connectivity (Image credit : Microsoft)

ER is an Enterprise offering for customers that require a high bandwidth and low latency connection to their Azure workloads. It provides different bandwidth options, from 50 Mbps to 10 Gbps, and a 99.9% SLA.

a- Requirements and prerequisites

  • An ER Gateway deployed to your VNET (An ER circuit can be shared between different VNETs)
  • An Exchange/Network provider that can provide the connection to Azure ER

b- Pros and Cons

Pros :

  • High bandwidth, low latency, private connection to Azure
  • An ER circuit can be shared between different VNETs, providing both a full mesh connection between the VNETs and connectivity to on-premises for all VNETs
  • Can be extended to connect VNETs in other regions
  • The possibility to use Microsoft peering (and Public peering) to reach other Microsoft services directly without going through the internet : Azure PaaS services (Web Apps, Azure SQL…), Office 365 services…

Cons :

  • Can take time to prepare and establish the circuit (weeks to months : contract with a network/exchange provider)
  • The cost can be significant for high bandwidth/unlimited tiers
  • Cannot easily be used for ROBO

c- Pricing

You will pay for :

  • The deployed ER Gateway (Per hour)
  • The outbound data leaving Azure in case you subscribed to the metered plan
  • The ER Premium add-on, in case you wish to enable premium features like sharing the ER circuit with VNETs outside the geopolitical region where the ER circuit is established
  • The ER circuit

2.3- VNET Peering

VNET Peering is an Azure technology that allows you to link/peer/connect 2 or more Azure Virtual Networks, using a few clicks and without deploying any additional resource. 2 peered VNETs behave like one bigger VNET, that’s it. Imagine putting a wire between two networks and starting to exchange traffic between them: this is VNET peering.

VNET peering establishes a private, LAN-like connectivity between 2 or more virtual networks. Resources within the virtual networks will see each other just like they were on the same one.

a- Requirements and prerequisites

  • 2 or more Virtual Networks (Note that peering VNETs in different regions is currently in preview, and not available in all regions)

b- Pros and Cons

Pros :

  • Easy to configure (a few clicks)
  • LAN-like performance

Cons :

  • A non-negligible cost (the pricing model is per volume, so not very predictable)

c- Pricing

You will pay for :

  • The data In and Out the VNET. For example, if you send 1 GB from VNET 1 to VNET 2, you will pay 1 GB leaving VNET 1 and 1 GB entering VNET 2
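The double-billing model above is easy to express in code. Here is a minimal sketch; the per-GB rates are hypothetical placeholders, not actual Azure prices:

```python
# Sketch of the VNET peering billing model described above.
# The per-GB rates are HYPOTHETICAL placeholders, not actual Azure prices.
def peering_cost(gb_transferred, egress_rate_per_gb, ingress_rate_per_gb):
    """Data is billed twice: once leaving the source VNET, once entering the destination VNET."""
    return gb_transferred * (egress_rate_per_gb + ingress_rate_per_gb)

# 1 GB sent from VNET 1 to VNET 2, with assumed rates of $0.01/GB each way
print(round(peering_cost(1, 0.01, 0.01), 4))
```

Because billing is per volume in both directions, the monthly cost scales directly with the chattiness of your workloads, which is why the pricing is hard to predict.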

2.4- VNET to VNET

The VNET to VNET connectivity is simply a S2S VPN between two VNETs, provided by the Azure VPN Gateways. It’s an additional, cost-effective cross connectivity option.

a- Requirements and prerequisites

  • A VPN Gateway on each VNET

b- Pros and Cons

Pros :

  • Simple to configure
  • Cost-effective : traffic between 2 VNETs within the same region is free, you pay only for the VPN Gateways (per hour)

c- Pricing

You will pay for :

  • The VPN Gateways (per hour)

2.5- 3rd party S2S VPN

You can opt for establishing network cross connectivity using your own technology, with a virtualized VPN device (or a device that uses any tunneling protocol). By deploying a Virtual Machine running your software (it must be supported on Azure, like a Linux-based virtual appliance), you can establish a connection to your other networks and then route traffic to your VNET using Route Tables (UDR).

a- Requirements and prerequisites

  • A Virtual Network Appliance supported by Azure

b- Pros and Cons

Pros :

  • Keep your Enterprise technology

Cons :

  • High Availability : most HA protocols, like VRRP, are not supported on Azure
  • Cost : that depends
  • Bandwidth/latency : traffic goes over the internet
  • Additional management (not a managed service)

c- Pricing

You will pay for :

  • The Virtual Machines you deploy
  • The outbound data leaving Azure

3- How to choose between the solutions

In a coming post, I will share a design I recommended to one of my customers, showing one of the architectures we can build using ExpressRoute and VNET peering. But that was for that particular customer’s context. Choosing which technology to use depends on many factors, including :

  • Budget : We saw that ER and Peering are relatively expensive compared to S2S VPN and VNET2VNET
  • Needs : I don’t need an ER between my ROBO and Azure if I have little data to exchange. But I need VNET peering if low latency is mandatory between my workloads spread across VNETs
  • Time to market : Establishing a S2S VPN is way quicker than an ER circuit, so an emergency may leave you with no choice, at least for the short term

My recommendations

I can’t recommend something without knowing the context and the needs, but in general, I see the picture as follows :

  • If you are in a Hybrid configuration for the mid/long term (more than 1 year), then providing an Enterprise connection between your datacenter and Azure is crucial. East-West traffic requires high bandwidth / low latency connections, so ExpressRoute is the only good choice.
  • If your workloads are spread across VNETs and latency/bandwidth matter, VNET peering is the best choice. If connection quality is not mandatory, you can opt for VNET2VNET connectivity, or share your ExpressRoute circuit if it already exists.
  • Small and medium ROBOs can use S2S VPNs to connect to Azure. If performance matters, an alternative architecture may make sense: establishing ER for a ROBO is neither practical nor cost effective, so you can opt for a hybrid architecture where all your offices are connected to a POP with high bandwidth / low latency links, and an ER links the POP to Azure.

Setup Highly Available Network Virtual Appliances in Active/Passive mode on Microsoft Azure (Using Zookeeper)


This guide is a replacement or an alternative to the article published on Github (that I consider not very clear, and that I’m certain has discouraged many of you from implementing it in production).

1- Architecture

1.1- Solution components

The following picture shows an implementation of a 2-node Active/Passive NVA deployment.

Image credit : Microsoft

The architecture includes the following components :

Network Virtual Appliances

2 nodes are supported. You should create two instances of your NVA, which means 2 Virtual Machines. The virtual machines have to be deployed within an Availability Set to ensure better availability, and for the group to achieve the 99.95% SLA. With the announcement of the Availability Zones preview, you can plan to deploy each NVA in a different zone, to ensure zone-failure resiliency within the region and achieve a 99.99% SLA. The configuration is Active/Passive, which means that even if both nodes are up and can process traffic, only one node receives traffic at a time.

NB : If you need an Active/Active configuration, you can use HA Ports, a feature of the Azure Load Balancer Standard SKU. HA Ports allow you to tell the load balancer to load balance traffic on all ports to the backend pool members. More information about HA Ports :

Zookeeper Nodes

Zookeeper is a centralized service that will automatically detect the failure of the active node and switch the traffic to the passive node (by applying some configuration to your Azure resources : Public IP address, UDRs). The passive node then becomes active. At least 3 Zookeeper nodes have to be deployed. 3 is the minimum required to achieve a ‘quorum’, which means that a decision made by the nodes has to be approved by at least 2 of them. To keep it simple, 2 nodes are needed for high availability. But with 2 nodes, there are scenarios where the 2 nodes are unable to communicate with each other (whatever the reason), and each one needs to make a decision (node 1 wants to switch the Active/Passive roles, node 2 wants to keep the same configuration). In this case, we need a ‘judge’: this is why we add a 3rd node, which will support either decision 1 or decision 2. This article about the Windows Server Failover Cluster quorum can help you understand the principle. Zookeeper is a lightweight service, so a minimal VM size like A1 is enough.
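The quorum rule above boils down to simple arithmetic: a decision needs a strict majority of votes. A minimal sketch:

```python
# Why 3 Zookeeper nodes are the minimum for a usable quorum.
def majority(cluster_size):
    """Smallest number of agreeing nodes that can make a decision (the quorum)."""
    return cluster_size // 2 + 1

# With 2 nodes, a split leaves each side 1 vote: majority(2) == 2, so neither side can decide.
# With 3 nodes, one side of any split still holds 2 votes and can decide.
for n in (2, 3, 5):
    print(n, majority(n))
```

This is also why Zookeeper clusters are usually sized with an odd number of nodes: going from 3 to 4 nodes raises the majority from 2 to 3 without tolerating any additional failure.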

Public IP Address

A static Public IP address resource should be attached to the active NVA node, to be used for outbound internet traffic and inbound traffic coming from the internet. This is the edge of your network. The Public IP address will be moved from the active node to the passive node by Zookeeper when an active node failure is detected.

User Defined Routes (UDRs)

This is the heart of the solution. UDRs are collections of routes that you can apply to Subnets. They are used to force the VMs within the Subnets to send traffic to selected targets instead of the default routes. A UDR can contain many routes.

1.2- Network Virtual Appliance interface counts

This is a ‘bigger’ topic than can be discussed in a few lines, but I need to mention it so that the next sections are understandable. The picture above shows 2 NICs per NVA. The first NIC is used for external traffic (inbound and outbound), and the Public IP address is attached to it. The second NIC is used for internal traffic: all internal subnets send their traffic to that NIC. You can have designs with more NICs per NVA; for example, you may want an Internal zone (communication between Subnets within the VNET), a cross-premises connectivity zone (communication to and from on-premises), a DMZ zone and an External zone. The NIC count affects the VM size (each VM size has a maximum supported NIC count, so keep it in mind when choosing the VM size).

1.3- Zookeeper nodes network configuration

The Zookeeper nodes must be placed within the same subnet as one of the NVA’s NICs. Do not place them in a subnet without a direct route (where a UDR is applied to reach the NVA), or in a subnet whose default routes point to the NVA’s subnet. The Zookeeper nodes need to continually probe one of the NVA’s NICs. For example, in the picture above, the Zookeeper nodes are placed in the internal DMZ subnet, and they continually probe the NVA’s NIC on that subnet. Note that Zookeeper will initiate a failover when the probed NIC stops responding, even if the other NICs are still alive. For example, if a firewall rule is added that prevents Zookeeper from probing the NIC, it will initiate a failover even if the NVA is alive, so be careful. Conversely, if all the other NICs are dead but the probed NIC is still alive, Zookeeper will not initiate a failover. So choose a port that is served by the NVA core service, and that represents the NVA health state.

The Zookeeper nodes need to exchange heartbeat and metadata information continually. Each Zookeeper node listens on a port (that you can define during the configuration), and the other nodes communicate with it on that port. The ports should be different between the nodes (in the example below, the ports are 2181, 2182 and 2183 for node 1, node 2 and node 3 respectively). If you enable the Zookeeper nodes’ OS firewall, do not forget to permit communication over the chosen ports.

1.4- User Defined Routes configuration

The UDRs should follow a very simple rule to be compatible with the Zookeeper-initiated failover: each UDR must contain only routes whose next hop is the same interface. This allows Zookeeper to deterministically set the next hop in case of failover. You can create multiple UDRs, each pointing to one interface, and apply them to the subnets.

The picture on the left shows a bad UDR configuration (UDR-Bad), because its routes do not have the same next hop. The picture on the right shows a good configuration (UDR-Good).
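The "one next hop per route table" rule is easy to check programmatically. Here is a minimal sketch; the dict shape is illustrative, not the ARM route table schema:

```python
# Sketch: validate the "one next hop per route table" rule described above.
# Route tables are plain dicts here; the shape is illustrative, not the ARM schema.
def is_failover_friendly(route_table):
    """A UDR is compatible with the failover only if all its routes share one next hop."""
    next_hops = {route["next_hop"] for route in route_table["routes"]}
    return len(next_hops) <= 1

udr_good = {"routes": [
    {"prefix": "0.0.0.0/0", "next_hop": "10.0.1.4"},
    {"prefix": "10.0.2.0/24", "next_hop": "10.0.1.4"},
]}
udr_bad = {"routes": [
    {"prefix": "0.0.0.0/0", "next_hop": "10.0.1.4"},
    {"prefix": "10.0.2.0/24", "next_hop": "10.0.1.5"},
]}
print(is_failover_friendly(udr_good))  # True
print(is_failover_friendly(udr_bad))   # False
```

If a table mixes next hops, Zookeeper cannot simply swap "NVA 1's IP" for "NVA 2's IP" across the whole table, which is exactly why UDR-Bad breaks the failover.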

2- How it works?

This Zookeeper solution has a very simple concept. Let’s first see what we need to do when the active node (Node 1) fails:

  1. Reconfigure all the UDRs, or more specifically the routes inside the Route Tables, to stop routing traffic to the Node 1 NICs and send it to Node 2 instead. This is done by changing the next hop of each route to the corresponding Node 2 IP
  2. Attach the Public IP address to the Node 2 ‘external’ NIC. This is done by un-assigning the Public IP address from the Node 1 ‘external’ NIC and assigning it to the Node 2 ‘external’ NIC

This is what Zookeeper does:

  1. Continually probe the Node 1 NIC to see if it’s alive
  2. If the probe fails, initiate the steps mentioned above, based on a configuration file
  3. Continually probe the Node 2 NIC to see if it’s alive
  4. If the probe fails, initiate the steps mentioned above, this time failing back to Node 1 if it’s alive
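The probe/failover loop above can be sketched as a toy simulation (no Azure calls; the failure threshold mirrors the retry settings discussed in the configuration section below):

```python
# Toy simulation of the Zookeeper probe loop described above (no Azure calls).
def run_probes(probe_results, failures_threshold=3):
    """Return the index of the probe at which a failover fires, or None.

    probe_results: sequence of booleans, True = probe succeeded.
    A failover fires after `failures_threshold` CONSECUTIVE failed probes.
    """
    consecutive_failures = 0
    for i, alive in enumerate(probe_results):
        consecutive_failures = 0 if alive else consecutive_failures + 1
        if consecutive_failures >= failures_threshold:
            return i  # here the real daemon swaps UDR next hops and moves the Public IP
    return None

# Three consecutive failed probes trigger the failover
print(run_probes([True, True, False, False, False]))  # 4
print(run_probes([True, False, True, False, False]))  # None (never 3 in a row)
```

A single missed probe resets nothing permanent: only consecutive failures count, which protects against one-off network blips triggering a needless failover.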

3- Implementation

This section will show you how to implement zookeeper into your infrastructure.

3.1- Prerequisites and requirements

In order to successfully implement Zookeeper, you will need to validate all the following points :

  • Create an Azure AD application, to act as the identity used by Zookeeper to make changes to your Azure resources during the failover. You will need to generate a certificate (pfx) during the application creation. This pfx will later be converted to the ‘jks’ format used by Zookeeper. The Azure AD app will be assigned permissions on the resources that Zookeeper will modify during the failover
  • The nvadaemon-remote.json file
  • The file
  • The file
  • An Azure Template to deploy the zookeeper VMs / configurations: 2 files: template.json and param.json
  • Optional : A Deploy.ps1 file that contains the PowerShell code to deploy the template


A- Create the Azure AD SPN

Follow this post to create an Azure AD SPN. Keep the pfx file and the password to be used later :

B- Assign the Azure AD App permissions on the Azure Resources

Zookeeper will use the Azure AD App identity to make changes to some Azure resources in order to perform the failover (the changes were discussed earlier in this post).

Give the Azure AD app the Contributor role on the following resources :

– The network Interface that Zookeeper will attach and detach the Public IP Address from

– The Route Tables that zookeeper will modify during the failover

Give the Azure AD app the Reader role on the following resources :

– The Public IP Address resource

The link in A- Create the Azure AD SPN shows how to make role assignments

C- Prepare the nvadaemon-remote.json

I highly recommend downloading and installing the following tool to view and edit JSON files :

The following picture shows a view of the nvadaemon-remote.json file

There are 2 main sections: zookeeper and daemon

C1- General Settings

Fill in the required information following the instructions in the table below; the red parameters must reflect your environment :




This is a comma separated string. The format is :

zookeepernode1:port1, zookeepernode2:port2, zookeepernode3:port3

Fill in your Zookeeper node names and the ports to be used. You can keep the default ports


This is the time interval between 2 successive probes (milliseconds)


How many retries before a Zookeeper node considers another node dead

A Zookeeper node will consider another Zookeeper node dead after : retrySleepTime x numberOfRetries.

In my example: 5 seconds


The Azure Subscription ID


The Azure AD App client ID (Application ID). This value can be copied from the Azure AD Application that you created previously


The Azure AD tenant ID. You can get this value from the Azure Portal → Azure Active Directory → Properties (Directory ID)


The path of the key store within the Zookeeper container. Keep the default : /nvabin/nva.jks


A password to protect the access to the Key store within the zookeeper containers.


The PFX certificate password defined earlier


This is the number of probe failures after which Zookeeper will consider the active node dead and initiate a failover.


This is the maximum time interval between 2 successive probes (milliseconds)

In my example, Zookeeper will initiate a failover after : 3 x 3000 = 9 seconds


C2- Routing Settings

Route Tables

The routeTables section is an array. Each line is the resource ID of a Route Table resource. As discussed earlier, you should have a route table for each interface, and a route table should not route to different next hops.

Public IP address

This is also an array with one entry. In the name, put anything that identifies the Public IP address (you can keep the default). In the id, paste the Public IP address resource ID.


In this section, we will add the NVAs’ NIC resources. Because we have 2 NVAs, the array contains 2 objects (0 and 1) : 0 is the first NVA, 1 is the second.

  • For each NVA, add all the interfaces that are addressed by the route tables configured earlier. If you have configured 2 Route Tables, you only need to add 2 NICs.
  • Each interface has a name and an id
  • For the NIC that the Public IP address will be attached to, use the same name that was used for the Public IP address (in my case it’s pip-fw-test). This allows Zookeeper to know which NIC will be assigned the Public IP address during the failover. In the id, type the resource id of the NIC. This NIC’s private IP will be used by Zookeeper as the next hop for the first route table (route table 0 → NIC 0)
  • The next route table will have the next NIC as its next hop (in my case nic2) (route table 1 → NIC 1). The NIC name can be changed, but must be identical across the NVAs
  • Note that in my case, nic3 is not needed, as it is not addressed by any route table. I should have removed it.
  • In probeNetworkInterface, type the id of the NVA’s NIC that Zookeeper will probe to get the health state of the NVA
  • In probePort, type the port that Zookeeper will probe

D- Copy the files to the Cloud Storage


All the files used during the Zookeeper installation should be available to the Zookeeper VMs during the deployment. The recommended way is to copy the files to a new or existing Azure Storage Account, so the prerequisite is to create a new storage account or use an existing one. Note that the files can be removed after the Zookeeper deployment.

The files to be copied are :

  • Certificate pfx file
  • nvadaemon-remote.json



Go to the Azure Portal and select your Storage Account.

Go to Blobs

Create a new Container :

Name : installzook

Policy Access: Container


Go to the container and upload the file :

Go to File

Create a new File Share named : zookeepermonitor

The quota can be set to 1

Upload the files :

  • Certificate pfx file
  • nvadaemon-remote.json



E- Prepare the param.json file

The param file contains the parameters to deploy your Zookeeper platform. Fill in the following parameters:

  • Location : The region where to deploy the resources
  • adminUsername : the username for the Ubuntu zookeeper nodes admin user
  • adminPassword : The password
  • VmSize : the vmsize (Standard_A1_v2 is enough)
  • vnetName : The VNET where the zookeeper will be deployed (Must be the same VNET where the NVAs are deployed)
  • vnetResourceGroup : The VNET RG
  • AvailabilitySetName : A name for the availability Set
  • VmnamePrefix : The zookeeper nodes name prefix. The example shows zook0, so the VM names will be zook01, zook02 and zook03
  • PrivateIPs : The Private IPs of each zookeeper node
  • Subnet : The subnet for the zookeeper nodes. It must be one of the NVA’s subnets
  • CERTPASSWORD : The password of the pfx file created earlier
  • CERTSTOREPASSWORD : the password to protect the certificate on the zookeeper nodes cert store
  • CERTNAME : the name of the pfx certificate file
  • SANAME : The storage account URL (where the files are stored)
  • SAKEY : the Storage account key
  • Customscripturl : the custom script URL
  • customScriptCommandToExecute : the command to execute (leave it unchanged, unless you have changed the script file name)


3.2- Deploy

Using PowerShell or the CLI, you can now deploy the 3 Zookeeper nodes:

New-AzureRmResourceGroupDeployment -Name DeploymentName -ResourceGroupName ExampleResourceGroup `

-TemplateFile "template file path" -TemplateParameterFile "template parameter file path"



4- Download the files

You can download the files from this link :




















Get Azure Datacenter IP ranges via API V2


In my previous post, I showed how to create a lightweight Azure function that allows you to request the Azure Datacenter IP ranges via an API. You can rapidly test it by following the instructions in section 3- Try it before deploying it here :

The feedback was positive, but many of you asked for a way to see whether there were updates compared to the last version, and what the updates are, if any.

In this post, I will publish the second version of the API (with the how-to), that allows you to :

  • Get the current Azure Datacenter IP ranges
  • Get the current Azure Datacenter IP ranges for a specific region
  • Get the region names (since, unfortunately, the region names published by Microsoft are not exactly the same as those used by the Microsoft Azure services)
  • New : Get the current version ID of the Azure Datacenter IP ranges
  • New : Get the previous version ID of the Azure Datacenter IP ranges
  • New : Get the difference between the current and the previous version for all regions
  • New : Get the difference between the current and the previous version for a specific region

The new features will allow easy integration with your environment, and simplify updating the firewall rules within your infrastructure.

1- How to request the API ?

The API supports only POST requests. You can make the following API requests using the following body constructions.

Here are examples using PowerShell, but you can use any tool to request the API with the same body content


#Get the current Azure IP address ranges of all regions

$body = @{"region"="all";"request"="dcip"} | ConvertTo-Json

#Get the current Azure IP address ranges of a specific region, example europewest

$body = @{"region"="europewest";"request"="dcip"} | ConvertTo-Json

#Get the Azure region names that we can request IPs for

$body = @{"request"="dcnames"} | ConvertTo-Json

#Post the request

$webrequest = Invoke-WebRequest -Method "POST" -Uri <the function URL> -Body $body

ConvertFrom-Json -InputObject $webrequest.Content

#New in V2

#Get the (added and/or removed) IP address range updates of a specific region

$body = @{"request"="getupdates";"region"="asiaeast"} | ConvertTo-Json

#Get the (added and/or removed) IP address range updates of all regions

$body = @{"request"="getupdates";"region"="all"} | ConvertTo-Json

#Get the current Azure DC IP ranges version ID

$body = @{"request"="currentversion"} | ConvertTo-Json

#Get the previous Azure DC IP ranges version ID

$body = @{"request"="previousversion"} | ConvertTo-Json

#Post the request

$webrequest = Invoke-WebRequest -Method "POST" -Uri <the function URL> -Body $body

ConvertFrom-Json -InputObject $webrequest.Content
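If PowerShell is not your tool of choice, the same POST bodies can be built in any language. Here is a minimal Python sketch; the function URL in the comment is a placeholder you must replace with your own deployed endpoint:

```python
# Sketch: build the JSON body expected by the API (same shape as the PowerShell examples).
import json

def build_body(req, region=None):
    """Return the JSON body for a given request type and optional region."""
    body = {"request": req}
    if region is not None:
        body["region"] = region
    return json.dumps(body)

print(build_body("dcip", "europewest"))
print(build_body("currentversion"))

# Posting the body with the standard library (URL is a placeholder):
# from urllib import request
# r = request.Request("https://<your-function-app>/getazuredcipranges",
#                     data=build_body("dcip", "all").encode(), method="POST")
# print(request.urlopen(r).read().decode())
```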

2- How to build the solution ?

2.1- Solutions components

The V2 version still uses only Azure Functions, but unlike V1, it uses multiple functions within the Function App:

  • 1 Function App
    • Function 1
    • Function 2
    • Function 3
    • Proxy 1
    • Proxy 2
  • 1 Storage Account

The following table details each component configuration. If you want to create the solution within your environment, create the same components using the given configuration:

Function App : azuredcip

  • This Function App will host the entire solution. It will include 3 functions, 1 Storage Account and two proxies
  • App Service Plan : Shared or greater. Use at least a Basic tier to benefit from SSL, custom domain names and backup

Function 1 : azuredcipranges

  • This function returns the V1 information
  • HttpTrigger – Powershell
  • Allowed HTTP methods : POST
  • Authorization level : Anonymous
  • You can add Function Keys if you want to secure the API access. In my case, my API stays public (anonymous) to continue supporting V1







Function 2 : azuredciprangesupdater

  • This function will do the following :
    – Get the current Azure DC IP ranges version and store it in the storage account
    – Always keep the previous Azure DC IP ranges version file in the storage account
    – Create a file containing the difference between the current and previous versions and store it in the storage account
    – Return the mentioned information based on the API request body
  • HttpTrigger – Powershell
  • Inputs (Type : Azure Blob Storage) :

| Name | Path | SA connection |
| --- | --- | --- |
| vnowinblob | azuredcipfiles/vnow.json | AzureWebJobDashboard |
| vpreviousinblob | azuredcipfiles/vprevious.json | AzureWebJobDashboard |
| vcompareinblob | azuredcipfiles/vcompare.json | AzureWebJobDashboard |

  • Outputs (Type : Azure Blob Storage) :

| Name | Path | SA connection |
| --- | --- | --- |
| vnowoutblob | azuredcipfiles/vnow.json | AzureWebJobDashboard |
| vpreviousoutblob | azuredcipfiles/vprevious.json | AzureWebJobDashboard |
| vcompareoutblob | azuredcipfiles/vcompare.json | AzureWebJobDashboard |

  • Keep the default http output
  • Allowed HTTP methods : POST
  • Authorization level : Function
  • You can use the default function key or generate a new key. This API will not be directly exposed, so you can protect it with a key





Function 3 : triggerazdciprangesupdate

  • This function triggers azuredciprangesupdater weekly to update the current and previous versions
  • TimerTrigger – Powershell
  • Schedule : 0 0 0 * * 3 (each Wednesday, but you can choose any day of the week, as Microsoft will not apply the updates before one week after their publication)

Proxy 1 : getazuredcipranges

  • This proxy relays requests to the azuredcipranges function
  • Route template : /getazuredcipranges
  • Allowed HTTP methods : POST
  • Backend URL : the azuredcipranges function URL (add the key if you have secured it)

Proxy 2 : getazuredcipupdates

  • This proxy relays requests to the azuredciprangesupdater function
  • Route template : /getazuredcipupdates
  • Allowed HTTP methods : POST
  • Backend URL : the azuredciprangesupdater function URL

Storage Account

  • This is the storage account automatically created with the Function App (it can have any other name)
  • Container : azuredcipfiles
  • Upload the following files :
    – vnow.json
    – vprevious.json

NB : These files are placeholder files. During the first API update request, the vnow content is copied to the vprevious file, then the vnow content is replaced by the real current version. At that moment, you can only request the current version. After one week, a new version of the file is published by Microsoft, so another cycle sets vnow and vprevious to 2 real consecutive versions, and you can benefit from the update and comparison features.
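The comparison that the updater stores in vcompare.json boils down to a set difference between the two versions. A minimal sketch (the dict shape is illustrative, not the exact layout of the real files):

```python
# Sketch of the vnow/vprevious comparison: ranges added and removed per region.
# The dict shape is illustrative; the real vnow.json/vcompare.json layout may differ.
def compare_versions(vnow, vprevious):
    regions = set(vnow) | set(vprevious)
    return {
        region: {
            "added": sorted(set(vnow.get(region, [])) - set(vprevious.get(region, []))),
            "removed": sorted(set(vprevious.get(region, [])) - set(vnow.get(region, []))),
        }
        for region in regions
    }

vprevious = {"europewest": ["13.69.0.0/17", "23.97.128.0/17"]}
vnow = {"europewest": ["13.69.0.0/17", "40.68.0.0/14"]}
print(compare_versions(vnow, vprevious)["europewest"])
```

Consumers only need to add the "added" ranges to their firewall rules and retire the "removed" ones, instead of re-importing the whole list every week.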

2.2- Download the files

You can download the needed files here; you will find the PowerShell code for each function (function1, function2, function3) and the two JSON files (vnow.json and vprevious.json).

NB : As mentioned before, after the first request to the API (getupdates), you will have a valid vnow version, but the previous version will be the placeholder uploaded now. You need to wait at least 1 week to have a valid previous version.


Create an Azure AD service principal (Based on Azure AD App) secured by a Certificate

When working with Azure, you may need to create an Azure AD Application to act as a Service Principal and use it to run operations on Azure resources. This post shows you how to register an Azure AD application secured by a self-signed certificate, all via PowerShell. You can modify the third script if you want to create the application using an existing certificate. The scripts used can be downloaded from here

1- Create a pfx certificate

In order for the Azure AD App to be secured, a certificate needs to be created. Prepare the following information to create your certificate :

  • common name (cn)
  • A password to protect the private key

The files Create-SSCv1.ps1 (for Windows 2008 R2/7) and Create-SSCv2.ps1 (for Windows 2016 /10) are powershell scripts that allow you to create a self-signed certificate.

Example using Create-SSCv1.ps1 (the DNS name replaces the common name)

.\Create-SSCv1.ps1 -DNSName zookeeperazure -Password P@ssw0rdzook -PFXPath c:\ -PFXName 

Example using Create-SSCv2.ps1 (More control over some options)

.\Create-SSCv2.ps1 -SubjectName zookeeper -Password P@ssw0rd -PFXPath C:\temp -PFXName zookeeper -MonthsValidity 24 -FriendlyName zookeepernva

2- Import the certificate into the Windows certificate store

The file Import-CertToStore.ps1 imports the certificate into the Personal store, so that it can be used to create the Azure AD App later. Provide the password used in the previous step.

.\Import-CertToStore.ps1 -Path C:\temp\zookeeper.pfx -Password P@ssw0rd

3- Create an Azure AD application to act as a Service Principal Name

Use the script file Create-azureadapp.ps1 to create the Azure AD application. The Azure AD application must have the same name as the certificate CN for the script to work. You will be prompted to log in to Azure.

.\Create-azureadapp.ps1 -ApplicationName zookeeper
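Under the hood, a script like this typically reads the certificate from the store and passes its base64-encoded raw data as the application credential. A hedged sketch using the classic AzureRM cmdlets (the IdentifierUris value and variable names are illustrative, not the actual script content):

```powershell
# Sketch (assumptions: AzureRM module installed, certificate already imported
# into the CurrentUser\My store by the previous step).
$appName = 'zookeeper'
$cert = Get-ChildItem Cert:\CurrentUser\My |
        Where-Object { $_.Subject -eq "CN=$appName" }

# The certificate's raw data is passed base64-encoded as the key credential
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())

$app = New-AzureRmADApplication -DisplayName $appName `
        -IdentifierUris "https://$appName" `
        -CertValue $keyValue -EndDate $cert.NotAfter

# Create the Service Principal backed by this application
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
```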

You can now see that a new application has been added to your Azure AD registered applications: Azure Portal → Azure Active Directory → App registrations

4- Add the application to an Azure Role

Now that your application has been created, you can assign it any Azure RBAC role. For example, I assigned the created application (zookeeper) the Reader role on the resource group RG-Azure.

Get Azure Datacenter IP address ranges via API

Hi folks,

One of the struggles that we may face when dealing with Azure services is network filtering. Today, we consume a lot of services provided by Azure, and some Azure services can consume services from our infrastructure. Microsoft publishes an 'xml' file that contains the exhaustive list of the Azure Datacenter IP ranges for all its public regions (no gov). This 'xml' file is regularly updated by Microsoft to reflect changes, as ranges can be added or removed.

The problem is that consuming an 'xml' file is not very convenient, as we need to make many transformations to consume its content. Many of you have requested that Microsoft at least publish this list via an API so it can be consumed from many sources using REST requests. Until Microsoft does so, I will show you today how to create a very lightweight web app on Azure (using Azure Functions) that will 'magically' do the job.

NB : The Azure Datacenter IP ranges include all the address space used by the Azure datacenters, including the customers address space

1- Why do we need these address ranges ?

If you want to consume Azure services, or if some Azure services want to consume your services, and you don't want to allow the whole "Internet" address space, you can 'reduce' the allowed ranges to only the Azure DC address space. In addition, you can go further and select the address ranges by Azure region, in case the Azure services are from a particular region.

Examples :

  • Some applications on your servers need to consume Azure Web Apps hosted on Azure. Without the Azure DC address space, you would have to allow them to access the Internet, which is a bad idea. Instead, you can configure your firewalls to only permit access to the Azure DC IP ranges.
  • If you are using Network Virtual Appliances on Azure and you want to allow the VM's Azure agent to access the Azure services (storage accounts) it needs to function properly, you can allow access only to the Azure DC IPs instead of the Internet.

2- The Solution

In order to consume the Azure Datacenter IPs via an API, I used the powerful and simple Azure Functions to provide a very lightweight 'file to JSON' converter. The Azure function will do the following:

  • Accept only a POST request
  • Download the Azure Datacenter IP ranges xml file
  • Convert it to a JSON format
  • Return an output based on the request:
    • A POST request accepts a body of the following format : { "region": "regionname", "request": "requesttype" }.
      • The "request" parameter can have one of the following values :
        • dcip : returns the list of the Azure Datacenter IP ranges, depending on the "regionname" parameter. "regionname" can be :
          • all : returns a JSON output of all the Azure Datacenter IP ranges of all regions
          • regionname : returns a JSON output of that region's Azure Datacenter IP ranges
        • dcnames : returns a list of the Azure Datacenter regions' names. The "regionname" parameter is ignored in this case
      • In case of a bad region name or request value, an error will be returned
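The routing rules above can be sketched as a small pure function, independent of the HTTP plumbing (the function and parameter names are mine, not taken from the deployed function):

```powershell
# Hypothetical helper mirroring the request routing described above.
# $Regions is a hashtable of regionname -> array of subnets.
function Resolve-DcRequest {
    param(
        [hashtable]$Regions,
        [string]$Region = 'all',   # defaults to 'all' when omitted
        [string]$Request
    )
    if ($Request -eq 'dcip') {
        if ($Region -eq 'all') { return $Regions }
        if ($Regions.ContainsKey($Region)) { return @{ $Region = $Regions[$Region] } }
        return @{ error = 'the requested region does not exist' }
    }
    elseif ($Request -eq 'dcnames') {
        # region names only; the Region parameter is ignored
        return @($Regions.Keys)
    }
    return @{ error = 'the request parameter is not valid' }
}
```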

3- Try it before deploying it

If you want to see the result, you can make the following requests using your favorite tool, against an Azure function hosted on my platform. In my case, I'm using PowerShell, and my function URI is

3.1- Get the address ranges of all the Azure DCs regions

#Powershell code

$body = @{"region"="all";"request"="dcip"} | ConvertTo-Json

# $funcUri holds your function URL (elided in the original post)
$webrequest = Invoke-WebRequest -Method "POST" -Uri $funcUri -Body $body

ConvertFrom-Json -InputObject $webrequest.Content

3.2- Get the address ranges of the North Europe region

In this case, note that we must use europenorth instead of northeurope

#Powershell code

$body = @{"region"="europenorth";"request"="dcip"} | ConvertTo-Json

# $funcUri holds your function URL (elided in the original post)
$webrequest = Invoke-WebRequest -Method "POST" -Uri $funcUri -Body $body

ConvertFrom-Json -InputObject $webrequest.Content

3.3- Get the region names

$body = @{"request"="dcnames"} | ConvertTo-Json

#or
#$body = @{"region"="anything";"request"="dcnames"} | ConvertTo-Json

# $funcUri holds your function URL (elided in the original post)
$webrequest = Invoke-WebRequest -Method "POST" -Uri $funcUri -Body $body

ConvertFrom-Json -InputObject $webrequest.Content

4- How to deploy it to your system ?

In order to deploy this function within your infrastructure, you will need to create an Azure Function App (section: Create a function app) within your subscription. You can use an existing Function App or App Service Plan.

NB : The App Service Plan OS must be Windows

After creating the Function App, do the following:

1. Go to your Function App and click Create new (+)
2. In Language, choose PowerShell, then select HttpTrigger – Powershell
3. Give a name to your function and choose an Authorization level. In my case, I set the authorization to Anonymous in order to keep the steps simple. We will see later how to secure access to the function HTTP trigger
4. Copy/paste the following code in the function tab, then click Save :

# POST method: $req
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$region = $requestBody.region
$request = $requestBody.request

if (-not $region) { $region = 'all' }

# Download page of the 'Microsoft Azure Datacenter IP Ranges' file (URL elided in the original post)
$URi = ""

$downloadPage = Invoke-WebRequest -Uri $URi -UseBasicParsing

# Extract the link to the PublicIps xml file from the download page
$xmlFileUri = ($downloadPage.RawContent.Split('"') -like "https://*PublicIps*")[0]

$response = Invoke-WebRequest -Uri $xmlFileUri -UseBasicParsing

[xml]$xmlResponse = [System.Text.Encoding]::UTF8.GetString($response.Content)

$AzDcIpTab = @{}

if ($request -eq 'dcip')
{
    foreach ($location in $xmlResponse.AzurePublicIpAddresses.Region)
    {
        if ($region -eq 'all') { $AzDcIpTab.Add($location.Name, $location.IpRange.Subnet) }
        elseif ($region -eq $location.Name) { $AzDcIpTab.Add($location.Name, $location.IpRange.Subnet) }
    }
    if ($AzDcIpTab.Count -eq 0) { $AzDcIpTab.Add("error", "the requested region does not exist") }
}
elseif ($request -eq 'dcnames')
{
    # Return only the region names; the region parameter is ignored
    $AzDcIpTab = $xmlResponse.AzurePublicIpAddresses.Region.Name
}
else
{
    $AzDcIpTab.Add("error", "the request parameter is not valid")
}

$AzDcIpJson = $AzDcIpTab | ConvertTo-Json

Out-File -Encoding Ascii -FilePath $res -InputObject $AzDcIpJson

Go to the Integrate tab, and choose the following:

  • Allowed HTTP methods : Selected methods
  • Selected HTTP methods : Keep only POST

Click Save

It’s done !

To test the function, go back to the main blade and expand the Test tab

On the request body, type :

{
"region" : "europewest",
"request": "dcnames"
}
Then click Run

You should see the results in the output, and the HTTP status 200 OK

Click Get function URL to get the URL of your function in order to query it via an external tool


5- Securing the access to the function URL

There are 3 options that let you secure the access to the Function URL:

5.1- Network IP Restrictions

I personally think that this is the best option to secure access to the Function URL. IP Restrictions allows you to permit only a set of public IP addresses to access the Function URL. For example, if you have an automation script that queries the API and updates a firewall object or a database, you can whitelist only the public IP address used by this automation workflow, i.e. its outbound IP address. This feature is available for Basic tier App Service Plans and greater; it is not supported for the Free and Shared SKUs. Start using it by following this tutorial :

NB : The Networking blade for a Function App can be found by clicking on the Function name → Platform features → Networking

5.2- Function Key and Host Key

You can restrict access to the Function URL by leveraging an authorization feature that protects the URL with a 'key'. There are two key types: the Function Key, which is defined per function, and the Host Key, which is defined once and shared by all the functions within a Function App. In order to protect a function via a Function Key, do the following :

1. Go to the Integrate blade, and change the Authorization level to Function
2. Go to the Manage blade. You can use the default generated key or add a new function key. Generating a new key is provided for key rotation
3. You can now access the 'protected' URL by clicking Get function URL on the main blade. You can select which key to use
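When calling a key-protected function from a script, the key travels either as a `code` query-string parameter or as an `x-functions-key` HTTP header. A quick sketch (the URL and key values are placeholders, not real credentials):

```powershell
# Sketch: the two standard ways to present a function key (placeholder values).
$funcUrl = 'https://myfuncapp.azurewebsites.net/api/myfunction'   # placeholder
$key     = '<function-or-host-key>'                               # placeholder

# Option 1: key in the query string
$uriWithKey = "${funcUrl}?code=${key}"

# Option 2: key in a request header
$headers = @{ 'x-functions-key' = $key }
# Invoke-WebRequest -Method POST -Uri $funcUrl -Headers $headers -Body $body
```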


5.3- Authentication and authorization feature in Azure App Service

This feature allows you to secure your Function App by requiring the caller to authenticate against an identity provider and present a token. You can use it with Azure Active Directory or Facebook, for example. I will not detail the steps in this post, but here are some resources :

Overview :

How to configure your App Service application to use Azure Active Directory login :